Test Report: Docker_Linux_containerd 14079

                    
798c4e8fed290cfa318a9fb994a7c6f555db39c1:2022-06-01:24216

Test fail (14/267)

TestNetworkPlugins/group/auto/Start (483.11s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220601104837-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p auto-20220601104837-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: exit status 80 (8m3.078588186s)
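To reproduce this failure outside CI, the same invocation can be replayed against a scratch profile. This is a sketch using the profile name and flags copied from the log; any unused profile name works, and the delete step is an addition here for cleanup, not part of the original test:

    # replay the failing start (flags copied from the log above)
    out/minikube-linux-amd64 start -p auto-20220601104837-6708 \
      --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m \
      --driver=docker --container-runtime=containerd
    # tear the profile down afterwards (cleanup step, not in the original test)
    out/minikube-linux-amd64 delete -p auto-20220601104837-6708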

-- stdout --
	* [auto-20220601104837-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node auto-20220601104837-6708 in cluster auto-20220601104837-6708
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-20220601104837-6708" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0601 10:55:19.598108  183213 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:55:19.598213  183213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:55:19.598222  183213 out.go:309] Setting ErrFile to fd 2...
	I0601 10:55:19.598226  183213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:55:19.598322  183213 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:55:19.598580  183213 out.go:303] Setting JSON to false
	I0601 10:55:19.599755  183213 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2274,"bootTime":1654078646,"procs":597,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:55:19.599813  183213 start.go:125] virtualization: kvm guest
	I0601 10:55:19.602285  183213 out.go:177] * [auto-20220601104837-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 10:55:19.603788  183213 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:55:19.603745  183213 notify.go:193] Checking for updates...
	I0601 10:55:19.605164  183213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:55:19.606395  183213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:55:19.607730  183213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:55:19.609023  183213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 10:55:19.611815  183213 config.go:178] Loaded profile config "cert-expiration-20220601105338-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:55:19.612237  183213 config.go:178] Loaded profile config "force-systemd-flag-20220601105435-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:55:19.612334  183213 config.go:178] Loaded profile config "running-upgrade-20220601105304-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0601 10:55:19.612372  183213 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:55:19.651756  183213 docker.go:137] docker version: linux-20.10.16
	I0601 10:55:19.651854  183213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:55:19.749557  183213 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:55:19.680574494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:55:19.749653  183213 docker.go:254] overlay module found
	I0601 10:55:19.751956  183213 out.go:177] * Using the docker driver based on user configuration
	I0601 10:55:19.753293  183213 start.go:284] selected driver: docker
	I0601 10:55:19.753304  183213 start.go:806] validating driver "docker" against <nil>
	I0601 10:55:19.753324  183213 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:55:19.754107  183213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:55:19.850577  183213 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:55:19.782022857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:55:19.850692  183213 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 10:55:19.850837  183213 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 10:55:19.852930  183213 out.go:177] * Using Docker driver with the root privilege
	I0601 10:55:19.854137  183213 cni.go:95] Creating CNI manager for ""
	I0601 10:55:19.854153  183213 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 10:55:19.854164  183213 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 10:55:19.854173  183213 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 10:55:19.854178  183213 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 10:55:19.854189  183213 start_flags.go:306] config:
	{Name:auto-20220601104837-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:auto-20220601104837-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
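The cni.go lines above explain why the kubelet.cni-conf-dir extra-config appears in this generated config: with the docker driver plus the containerd runtime, minikube recommends kindnet and auto-sets kubelet's CNI conf dir. A hedged equivalent that passes the same setting explicitly instead of relying on the auto-set value:

    # explicit form of the extra-config minikube auto-set in the log above
    out/minikube-linux-amd64 start -p auto-20220601104837-6708 \
      --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cni-conf-dir=/etc/cni/net.mk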
	I0601 10:55:19.855726  183213 out.go:177] * Starting control plane node auto-20220601104837-6708 in cluster auto-20220601104837-6708
	I0601 10:55:19.856945  183213 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 10:55:19.858154  183213 out.go:177] * Pulling base image ...
	I0601 10:55:19.859427  183213 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:55:19.859456  183213 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 10:55:19.859468  183213 cache.go:57] Caching tarball of preloaded images
	I0601 10:55:19.859459  183213 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:55:19.859657  183213 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 10:55:19.859671  183213 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 10:55:19.859760  183213 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/config.json ...
	I0601 10:55:19.859782  183213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/config.json: {Name:mkf525e10e38fedab80c837d2396febb7a245625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:55:19.902491  183213 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 10:55:19.902513  183213 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 10:55:19.902528  183213 cache.go:206] Successfully downloaded all kic artifacts
	I0601 10:55:19.902552  183213 start.go:352] acquiring machines lock for auto-20220601104837-6708: {Name:mk529476ef8e3a117be045b28bafd38c3a10386b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:55:19.902658  183213 start.go:356] acquired machines lock for "auto-20220601104837-6708" in 85.245µs
	I0601 10:55:19.902680  183213 start.go:91] Provisioning new machine with config: &{Name:auto-20220601104837-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:auto-20220601104837-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 10:55:19.902751  183213 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:55:19.904847  183213 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 10:55:19.905047  183213 start.go:165] libmachine.API.Create for "auto-20220601104837-6708" (driver="docker")
	I0601 10:55:19.905076  183213 client.go:168] LocalClient.Create starting
	I0601 10:55:19.905129  183213 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 10:55:19.905156  183213 main.go:134] libmachine: Decoding PEM data...
	I0601 10:55:19.905171  183213 main.go:134] libmachine: Parsing certificate...
	I0601 10:55:19.905220  183213 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 10:55:19.905256  183213 main.go:134] libmachine: Decoding PEM data...
	I0601 10:55:19.905264  183213 main.go:134] libmachine: Parsing certificate...
	I0601 10:55:19.905556  183213 cli_runner.go:164] Run: docker network inspect auto-20220601104837-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:55:19.935657  183213 cli_runner.go:211] docker network inspect auto-20220601104837-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:55:19.935725  183213 network_create.go:272] running [docker network inspect auto-20220601104837-6708] to gather additional debugging logs...
	I0601 10:55:19.935742  183213 cli_runner.go:164] Run: docker network inspect auto-20220601104837-6708
	W0601 10:55:19.966318  183213 cli_runner.go:211] docker network inspect auto-20220601104837-6708 returned with exit code 1
	I0601 10:55:19.966355  183213 network_create.go:275] error running [docker network inspect auto-20220601104837-6708]: docker network inspect auto-20220601104837-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220601104837-6708
	I0601 10:55:19.966365  183213 network_create.go:277] output of [docker network inspect auto-20220601104837-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220601104837-6708
	
	** /stderr **
	I0601 10:55:19.966407  183213 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:55:19.997192  183213 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-0e14ad5677a3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c3:97:32:70}}
	I0601 10:55:19.997956  183213 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-f0d3a82d2df1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:2d:ae:0e:7a}}
	I0601 10:55:19.998825  183213 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000798968] misses:0}
	I0601 10:55:19.998863  183213 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:55:19.998885  183213 network_create.go:115] attempt to create docker network auto-20220601104837-6708 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0601 10:55:19.998944  183213 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601104837-6708
	I0601 10:55:20.065184  183213 network_create.go:99] docker network auto-20220601104837-6708 192.168.67.0/24 created
	I0601 10:55:20.065213  183213 kic.go:106] calculated static IP "192.168.67.2" for the "auto-20220601104837-6708" container
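Subnet selection above walks candidate private /24s (192.168.49.0, 192.168.58.0, ...) and reserves the first one no existing bridge claims, here 192.168.67.0/24. To see which subnets the Docker daemon has already taken, a hedged one-liner using the same inspect template fields that appear in the log (assumes the default Docker context):

    # list every docker network with its subnet
    docker network ls -q | xargs docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'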
	I0601 10:55:20.065278  183213 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:55:20.099305  183213 cli_runner.go:164] Run: docker volume create auto-20220601104837-6708 --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 10:55:20.130610  183213 oci.go:103] Successfully created a docker volume auto-20220601104837-6708
	I0601 10:55:20.130694  183213 cli_runner.go:164] Run: docker run --rm --name auto-20220601104837-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --entrypoint /usr/bin/test -v auto-20220601104837-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 10:55:20.917035  183213 oci.go:107] Successfully prepared a docker volume auto-20220601104837-6708
	I0601 10:55:20.917083  183213 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:55:20.917104  183213 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 10:55:20.917154  183213 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220601104837-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 10:55:26.678931  183213 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220601104837-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (5.761715487s)
	I0601 10:55:26.678962  183213 kic.go:188] duration metric: took 5.761853 seconds to extract preloaded images to volume
	W0601 10:55:26.679101  183213 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 10:55:26.679222  183213 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 10:55:26.811103  183213 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220601104837-6708 --name auto-20220601104837-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220601104837-6708 --network auto-20220601104837-6708 --ip 192.168.67.2 --volume auto-20220601104837-6708:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	W0601 10:55:26.878743  183213 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220601104837-6708 --name auto-20220601104837-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220601104837-6708 --network auto-20220601104837-6708 --ip 192.168.67.2 --volume auto-20220601104837-6708:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a returned with exit code 125
	I0601 10:55:26.878821  183213 client.go:171] LocalClient.Create took 6.973734698s
	I0601 10:55:28.880014  183213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:55:28.880081  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	W0601 10:55:28.911765  183213 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708 returned with exit code 1
	I0601 10:55:28.911911  183213 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
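Each of the retries that follow fails for the same underlying reason: the docker run above returned exit code 125, so the container exists but never started, and there is no published SSH port to inspect. A hedged manual check of the container's state, using standard docker inspect state fields:

    # inspect the created-but-not-running container (profile name from the log)
    docker container inspect auto-20220601104837-6708 \
      --format '{{.State.Status}} exit={{.State.ExitCode}} err={{.State.Error}}'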
	I0601 10:55:29.188378  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	W0601 10:55:29.218607  183213 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708 returned with exit code 1
	I0601 10:55:29.218705  183213 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0601 10:55:29.759498  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	W0601 10:55:29.792209  183213 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708 returned with exit code 1
	I0601 10:55:29.794088  183213 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0601 10:55:30.449930  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	W0601 10:55:30.480807  183213 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708 returned with exit code 1
	W0601 10:55:30.480917  183213 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0601 10:55:30.480942  183213 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0601 10:55:30.480975  183213 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:55:30.481006  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	W0601 10:55:30.510565  183213 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708 returned with exit code 1
	I0601 10:55:30.510683  183213 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0601 10:55:30.742107  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	W0601 10:55:30.773462  183213 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708 returned with exit code 1
	I0601 10:55:30.773562  183213 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0601 10:55:31.219174  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	W0601 10:55:31.250336  183213 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708 returned with exit code 1
	I0601 10:55:31.250439  183213 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0601 10:55:31.568935  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	W0601 10:55:31.600371  183213 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708 returned with exit code 1
	I0601 10:55:31.600486  183213 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0601 10:55:32.155273  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	W0601 10:55:32.186231  183213 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708 returned with exit code 1
	W0601 10:55:32.186355  183213 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0601 10:55:32.186372  183213 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0601 10:55:32.186381  183213 start.go:134] duration metric: createHost completed in 12.283624142s
	I0601 10:55:32.186393  183213 start.go:81] releasing machines lock for "auto-20220601104837-6708", held for 12.283723059s
	W0601 10:55:32.186426  183213 start.go:599] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220601104837-6708 --name auto-20220601104837-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220601104837-6708 --network auto-20220601104837-6708 --ip 192.168.67.2 --volume auto-20220601104837-6708:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a: exit status 125
	stdout:
	d73c1f6fa0c815e3217f0f9bfb3aade7a6ea37e5b249c5e9dd2a51afdc1170b9
	
	stderr:
	docker: Error response from daemon: network auto-20220601104837-6708 not found.
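This is the root failure: the network auto-20220601104837-6708 was created at 10:55:20.065 but was gone by the time docker run tried to attach to it about six seconds later, which suggests something on this shared daemon (parallel test profiles run concurrently) removed it in between. Hedged diagnostics for this kind of race:

    # confirm the network is really gone
    docker network inspect auto-20220601104837-6708
    # look for a network destroy event around the failure window
    docker events --since 15m --until "$(date +%s)" --filter type=network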
	I0601 10:55:32.186997  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	W0601 10:55:32.217122  183213 start.go:604] delete host: Docker machine "auto-20220601104837-6708" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0601 10:55:32.217337  183213 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220601104837-6708 --name auto-20220601104837-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220601104837-6708 --network auto-20220601104837-6708 --ip 192.168.67.2 --volume auto-20220601104837-6708:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a: exit status 125
	stdout:
	d73c1f6fa0c815e3217f0f9bfb3aade7a6ea37e5b249c5e9dd2a51afdc1170b9
	
	stderr:
	docker: Error response from daemon: network auto-20220601104837-6708 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220601104837-6708 --name auto-20220601104837-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220601104837-6708 --network auto-20220601104837-6708 --ip 192.168.67.2 --volume auto-20220601104837-6708:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a: exit status 125
	stdout:
	d73c1f6fa0c815e3217f0f9bfb3aade7a6ea37e5b249c5e9dd2a51afdc1170b9
	
	stderr:
	docker: Error response from daemon: network auto-20220601104837-6708 not found.
	
	I0601 10:55:32.217361  183213 start.go:614] Will try again in 5 seconds ...
	I0601 10:55:37.219341  183213 start.go:352] acquiring machines lock for auto-20220601104837-6708: {Name:mk529476ef8e3a117be045b28bafd38c3a10386b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:55:37.219475  183213 start.go:356] acquired machines lock for "auto-20220601104837-6708" in 76.37µs
	I0601 10:55:37.219496  183213 start.go:94] Skipping create...Using existing machine configuration
	I0601 10:55:37.219501  183213 fix.go:55] fixHost starting: 
	I0601 10:55:37.219715  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:37.249583  183213 fix.go:103] recreateIfNeeded on auto-20220601104837-6708: state= err=<nil>
	I0601 10:55:37.249617  183213 fix.go:108] machineExists: false. err=machine does not exist
	I0601 10:55:37.252296  183213 out.go:177] * docker "auto-20220601104837-6708" container is missing, will recreate.
	I0601 10:55:37.253608  183213 delete.go:124] DEMOLISHING auto-20220601104837-6708 ...
	I0601 10:55:37.253673  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:37.283685  183213 stop.go:79] host is in state 
	I0601 10:55:37.283733  183213 main.go:134] libmachine: Stopping "auto-20220601104837-6708"...
	I0601 10:55:37.283783  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:37.312685  183213 kic_runner.go:93] Run: systemctl --version
	I0601 10:55:37.312713  183213 kic_runner.go:114] Args: [docker exec --privileged auto-20220601104837-6708 systemctl --version]
	I0601 10:55:37.341655  183213 kic_runner.go:93] Run: sudo service kubelet stop
	I0601 10:55:37.341681  183213 kic_runner.go:114] Args: [docker exec --privileged auto-20220601104837-6708 sudo service kubelet stop]
	I0601 10:55:37.371369  183213 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container d73c1f6fa0c815e3217f0f9bfb3aade7a6ea37e5b249c5e9dd2a51afdc1170b9 is not running
	
	** /stderr **
	W0601 10:55:37.371392  183213 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d73c1f6fa0c815e3217f0f9bfb3aade7a6ea37e5b249c5e9dd2a51afdc1170b9 is not running
	I0601 10:55:37.371440  183213 kic_runner.go:93] Run: sudo service kubelet stop
	I0601 10:55:37.371454  183213 kic_runner.go:114] Args: [docker exec --privileged auto-20220601104837-6708 sudo service kubelet stop]
	I0601 10:55:37.402549  183213 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container d73c1f6fa0c815e3217f0f9bfb3aade7a6ea37e5b249c5e9dd2a51afdc1170b9 is not running
	
	** /stderr **
	W0601 10:55:37.402576  183213 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d73c1f6fa0c815e3217f0f9bfb3aade7a6ea37e5b249c5e9dd2a51afdc1170b9 is not running
	I0601 10:55:37.402594  183213 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0601 10:55:37.402658  183213 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0601 10:55:37.402673  183213 kic_runner.go:114] Args: [docker exec --privileged auto-20220601104837-6708 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I0601 10:55:37.431325  183213 kic.go:452] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d73c1f6fa0c815e3217f0f9bfb3aade7a6ea37e5b249c5e9dd2a51afdc1170b9 is not running
	I0601 10:55:37.431351  183213 kic.go:462] successfully stopped kubernetes!
	I0601 10:55:37.431399  183213 kic_runner.go:93] Run: pgrep kube-apiserver
	I0601 10:55:37.431416  183213 kic_runner.go:114] Args: [docker exec --privileged auto-20220601104837-6708 pgrep kube-apiserver]
	I0601 10:55:37.489923  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:40.519499  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:43.552337  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:46.584009  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:49.615464  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:52.645973  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:55.675424  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:55:58.707716  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:01.738603  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:04.774297  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:07.805887  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:10.837534  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:13.869384  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:16.900007  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:19.931277  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:22.964128  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:25.998350  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:29.032007  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:32.068364  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:35.118695  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:38.159109  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:41.202226  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:44.233825  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:47.267991  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:50.304921  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:53.338254  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:56.368266  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:56:59.408005  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:02.440121  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:05.471250  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:08.503996  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:11.536013  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:14.573916  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:17.611972  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:20.646297  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:23.714056  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:26.745839  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:29.791530  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:32.828150  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:35.861802  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:38.899969  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:41.933815  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:45.005587  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:48.047984  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:51.079617  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:54.117089  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:57:57.151642  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:00.181867  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:03.213069  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:06.247384  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:09.280088  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:12.314329  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:15.350762  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:18.381776  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:21.412038  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:24.450884  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:27.487996  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:30.519049  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:33.551970  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:36.615795  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:39.657408  183213 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0601 10:58:39.736182  183213 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
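The timestamps above match the stop path's polling behavior: roughly 60 docker container inspect calls about 3 seconds apart (10:55:37 to 10:58:39) before giving up with "Maximum number of retries (60) exceeded". A hedged shell sketch of that loop shape, not minikube's actual implementation:

    # approximate the stop poll seen in the log (60 tries, ~3 s apart)
    for i in $(seq 1 60); do
      s=$(docker container inspect auto-20220601104837-6708 --format '{{.State.Status}}' 2>/dev/null)
      [ "$s" = "exited" ] && break
      sleep 3
    done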
	I0601 10:58:39.736602  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	W0601 10:58:39.766603  183213 delete.go:135] deletehost failed: Docker machine "auto-20220601104837-6708" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 10:58:39.766686  183213 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220601104837-6708
	I0601 10:58:39.796594  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:39.826136  183213 cli_runner.go:164] Run: docker exec --privileged -t auto-20220601104837-6708 /bin/bash -c "sudo init 0"
	W0601 10:58:39.857267  183213 cli_runner.go:211] docker exec --privileged -t auto-20220601104837-6708 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 10:58:39.857299  183213 oci.go:625] error shutdown auto-20220601104837-6708: docker exec --privileged -t auto-20220601104837-6708 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d73c1f6fa0c815e3217f0f9bfb3aade7a6ea37e5b249c5e9dd2a51afdc1170b9 is not running
	I0601 10:58:40.857458  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:40.891973  183213 oci.go:639] temporary error: container auto-20220601104837-6708 status is  but expect it to be exited
	I0601 10:58:40.892007  183213 oci.go:645] Successfully shutdown container auto-20220601104837-6708
	I0601 10:58:40.892048  183213 cli_runner.go:164] Run: docker rm -f -v auto-20220601104837-6708
	I0601 10:58:40.942422  183213 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220601104837-6708
	W0601 10:58:40.973939  183213 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220601104837-6708 returned with exit code 1
	I0601 10:58:40.974030  183213 cli_runner.go:164] Run: docker network inspect auto-20220601104837-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:58:41.008210  183213 cli_runner.go:211] docker network inspect auto-20220601104837-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:58:41.008294  183213 network_create.go:272] running [docker network inspect auto-20220601104837-6708] to gather additional debugging logs...
	I0601 10:58:41.008313  183213 cli_runner.go:164] Run: docker network inspect auto-20220601104837-6708
	W0601 10:58:41.038627  183213 cli_runner.go:211] docker network inspect auto-20220601104837-6708 returned with exit code 1
	I0601 10:58:41.038660  183213 network_create.go:275] error running [docker network inspect auto-20220601104837-6708]: docker network inspect auto-20220601104837-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220601104837-6708
	I0601 10:58:41.038680  183213 network_create.go:277] output of [docker network inspect auto-20220601104837-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220601104837-6708
	
	** /stderr **
	W0601 10:58:41.038830  183213 delete.go:139] delete failed (probably ok) <nil>
	I0601 10:58:41.038846  183213 fix.go:115] Sleeping 1 second for extra luck!
	I0601 10:58:42.038930  183213 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:58:42.041386  183213 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 10:58:42.041507  183213 start.go:165] libmachine.API.Create for "auto-20220601104837-6708" (driver="docker")
	I0601 10:58:42.041540  183213 client.go:168] LocalClient.Create starting
	I0601 10:58:42.041613  183213 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 10:58:42.041645  183213 main.go:134] libmachine: Decoding PEM data...
	I0601 10:58:42.041660  183213 main.go:134] libmachine: Parsing certificate...
	I0601 10:58:42.041719  183213 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 10:58:42.041741  183213 main.go:134] libmachine: Decoding PEM data...
	I0601 10:58:42.041760  183213 main.go:134] libmachine: Parsing certificate...
	I0601 10:58:42.041991  183213 cli_runner.go:164] Run: docker network inspect auto-20220601104837-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:58:42.077330  183213 cli_runner.go:211] docker network inspect auto-20220601104837-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:58:42.077384  183213 network_create.go:272] running [docker network inspect auto-20220601104837-6708] to gather additional debugging logs...
	I0601 10:58:42.077402  183213 cli_runner.go:164] Run: docker network inspect auto-20220601104837-6708
	W0601 10:58:42.107012  183213 cli_runner.go:211] docker network inspect auto-20220601104837-6708 returned with exit code 1
	I0601 10:58:42.107047  183213 network_create.go:275] error running [docker network inspect auto-20220601104837-6708]: docker network inspect auto-20220601104837-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220601104837-6708
	I0601 10:58:42.107070  183213 network_create.go:277] output of [docker network inspect auto-20220601104837-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220601104837-6708
	
	** /stderr **
	I0601 10:58:42.107116  183213 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:58:42.136955  183213 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-ab9449f0ea0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:97:ae:4d:57}}
	I0601 10:58:42.137489  183213 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-fc09ea173bbb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:b9:4b:c1:0f}}
	I0601 10:58:42.138044  183213 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-787fac1877c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:00:92:62:57}}
	I0601 10:58:42.138677  183213 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000798968 192.168.76.0:0xc0005ae1b8] misses:0}
	I0601 10:58:42.138708  183213 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:58:42.138721  183213 network_create.go:115] attempt to create docker network auto-20220601104837-6708 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0601 10:58:42.138764  183213 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601104837-6708
	I0601 10:58:42.204322  183213 network_create.go:99] docker network auto-20220601104837-6708 192.168.76.0/24 created
	I0601 10:58:42.204354  183213 kic.go:106] calculated static IP "192.168.76.2" for the "auto-20220601104837-6708" container
	I0601 10:58:42.204413  183213 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:58:42.236402  183213 cli_runner.go:164] Run: docker volume create auto-20220601104837-6708 --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 10:58:42.265145  183213 oci.go:103] Successfully created a docker volume auto-20220601104837-6708
	I0601 10:58:42.265224  183213 cli_runner.go:164] Run: docker run --rm --name auto-20220601104837-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --entrypoint /usr/bin/test -v auto-20220601104837-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 10:58:42.709941  183213 oci.go:107] Successfully prepared a docker volume auto-20220601104837-6708
	I0601 10:58:42.709980  183213 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:58:42.709997  183213 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 10:58:42.710049  183213 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220601104837-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 10:58:49.229227  183213 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220601104837-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (6.519116235s)
	I0601 10:58:49.229267  183213 kic.go:188] duration metric: took 6.519265 seconds to extract preloaded images to volume
	W0601 10:58:49.229440  183213 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 10:58:49.229556  183213 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 10:58:49.363397  183213 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220601104837-6708 --name auto-20220601104837-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220601104837-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220601104837-6708 --network auto-20220601104837-6708 --ip 192.168.76.2 --volume auto-20220601104837-6708:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 10:58:50.241655  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Running}}
	I0601 10:58:50.280206  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:50.317784  183213 cli_runner.go:164] Run: docker exec auto-20220601104837-6708 stat /var/lib/dpkg/alternatives/iptables
	I0601 10:58:50.398965  183213 oci.go:247] the created container "auto-20220601104837-6708" has a running status.
	I0601 10:58:50.399012  183213 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa...
	I0601 10:58:50.709819  183213 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 10:58:50.813926  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:50.854402  183213 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 10:58:50.854429  183213 kic_runner.go:114] Args: [docker exec --privileged auto-20220601104837-6708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 10:58:50.941689  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:58:50.976403  183213 machine.go:88] provisioning docker machine ...
	I0601 10:58:50.976445  183213 ubuntu.go:169] provisioning hostname "auto-20220601104837-6708"
	I0601 10:58:50.976495  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:51.013445  183213 main.go:134] libmachine: Using SSH client type: native
	I0601 10:58:51.013663  183213 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0601 10:58:51.013681  183213 main.go:134] libmachine: About to run SSH command:
	sudo hostname auto-20220601104837-6708 && echo "auto-20220601104837-6708" | sudo tee /etc/hostname
	I0601 10:58:51.144862  183213 main.go:134] libmachine: SSH cmd err, output: <nil>: auto-20220601104837-6708
	
	I0601 10:58:51.144952  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:51.179165  183213 main.go:134] libmachine: Using SSH client type: native
	I0601 10:58:51.179345  183213 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0601 10:58:51.179369  183213 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20220601104837-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220601104837-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20220601104837-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 10:58:51.305074  183213 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 10:58:51.305112  183213 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 10:58:51.305142  183213 ubuntu.go:177] setting up certificates
	I0601 10:58:51.305154  183213 provision.go:83] configureAuth start
	I0601 10:58:51.305215  183213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220601104837-6708
	I0601 10:58:51.342859  183213 provision.go:138] copyHostCerts
	I0601 10:58:51.342916  183213 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 10:58:51.342926  183213 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 10:58:51.342981  183213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 10:58:51.343054  183213 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 10:58:51.343065  183213 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 10:58:51.343087  183213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 10:58:51.343134  183213 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 10:58:51.343142  183213 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 10:58:51.343161  183213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 10:58:51.343202  183213 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.auto-20220601104837-6708 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220601104837-6708]
	I0601 10:58:51.670896  183213 provision.go:172] copyRemoteCerts
	I0601 10:58:51.670951  183213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 10:58:51.670998  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:51.704526  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:58:51.792643  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0601 10:58:51.817375  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 10:58:51.837381  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 10:58:51.855385  183213 provision.go:86] duration metric: configureAuth took 550.212949ms
	I0601 10:58:51.855409  183213 ubuntu.go:193] setting minikube options for container-runtime
	I0601 10:58:51.855587  183213 config.go:178] Loaded profile config "auto-20220601104837-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:58:51.855600  183213 machine.go:91] provisioned docker machine in 879.171604ms
	I0601 10:58:51.855605  183213 client.go:171] LocalClient.Create took 9.814057501s
	I0601 10:58:51.855627  183213 start.go:173] duration metric: libmachine.API.Create for "auto-20220601104837-6708" took 9.814118109s
	I0601 10:58:51.855641  183213 start.go:306] post-start starting for "auto-20220601104837-6708" (driver="docker")
	I0601 10:58:51.855647  183213 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 10:58:51.855692  183213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 10:58:51.855732  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:51.889954  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:58:51.994225  183213 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 10:58:51.997717  183213 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 10:58:51.997742  183213 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 10:58:51.997751  183213 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 10:58:51.997756  183213 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 10:58:51.997765  183213 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 10:58:51.997824  183213 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 10:58:51.997911  183213 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 10:58:51.998020  183213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 10:58:52.016815  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 10:58:52.037743  183213 start.go:309] post-start completed in 182.089048ms
	I0601 10:58:52.038118  183213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220601104837-6708
	I0601 10:58:52.079352  183213 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/config.json ...
	I0601 10:58:52.079602  183213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:58:52.079661  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:52.124927  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:58:52.208348  183213 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:58:52.212241  183213 start.go:134] duration metric: createHost completed in 10.173275561s
	I0601 10:58:52.212329  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	W0601 10:58:52.256016  183213 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 10:58:52.256052  183213 machine.go:88] provisioning docker machine ...
	I0601 10:58:52.256074  183213 ubuntu.go:169] provisioning hostname "auto-20220601104837-6708"
	I0601 10:58:52.256133  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:52.291435  183213 main.go:134] libmachine: Using SSH client type: native
	I0601 10:58:52.291623  183213 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0601 10:58:52.291651  183213 main.go:134] libmachine: About to run SSH command:
	sudo hostname auto-20220601104837-6708 && echo "auto-20220601104837-6708" | sudo tee /etc/hostname
	I0601 10:58:52.430906  183213 main.go:134] libmachine: SSH cmd err, output: <nil>: auto-20220601104837-6708
	
	I0601 10:58:52.430988  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:52.464529  183213 main.go:134] libmachine: Using SSH client type: native
	I0601 10:58:52.464713  183213 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0601 10:58:52.464746  183213 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20220601104837-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220601104837-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20220601104837-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 10:58:52.588166  183213 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 10:58:52.588244  183213 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 10:58:52.588290  183213 ubuntu.go:177] setting up certificates
	I0601 10:58:52.588300  183213 provision.go:83] configureAuth start
	I0601 10:58:52.588350  183213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220601104837-6708
	I0601 10:58:52.631660  183213 provision.go:138] copyHostCerts
	I0601 10:58:52.631720  183213 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 10:58:52.631728  183213 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 10:58:52.631791  183213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 10:58:52.631935  183213 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 10:58:52.631946  183213 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 10:58:52.631982  183213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 10:58:52.632058  183213 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 10:58:52.632065  183213 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 10:58:52.632095  183213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 10:58:52.632149  183213 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.auto-20220601104837-6708 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220601104837-6708]
	I0601 10:58:52.918422  183213 provision.go:172] copyRemoteCerts
	I0601 10:58:52.918479  183213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 10:58:52.918510  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:52.951469  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:58:53.043121  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 10:58:53.062485  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0601 10:58:53.112686  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 10:58:53.134032  183213 provision.go:86] duration metric: configureAuth took 545.715731ms
	I0601 10:58:53.134071  183213 ubuntu.go:193] setting minikube options for container-runtime
	I0601 10:58:53.134304  183213 config.go:178] Loaded profile config "auto-20220601104837-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:58:53.134330  183213 machine.go:91] provisioned docker machine in 878.269789ms
	I0601 10:58:53.134340  183213 start.go:306] post-start starting for "auto-20220601104837-6708" (driver="docker")
	I0601 10:58:53.134348  183213 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 10:58:53.134414  183213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 10:58:53.134465  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:53.181764  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:58:53.282610  183213 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 10:58:53.287320  183213 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 10:58:53.287361  183213 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 10:58:53.287375  183213 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 10:58:53.287381  183213 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 10:58:53.287393  183213 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 10:58:53.287441  183213 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 10:58:53.287523  183213 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 10:58:53.287632  183213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 10:58:53.297927  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 10:58:53.318769  183213 start.go:309] post-start completed in 184.4139ms
	I0601 10:58:53.318856  183213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:58:53.318901  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:53.362405  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:58:53.452827  183213 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:58:53.458038  183213 fix.go:57] fixHost completed within 3m16.238530274s
	I0601 10:58:53.458065  183213 start.go:81] releasing machines lock for "auto-20220601104837-6708", held for 3m16.238576568s
	I0601 10:58:53.458162  183213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220601104837-6708
	I0601 10:58:53.501993  183213 ssh_runner.go:195] Run: sudo service crio stop
	I0601 10:58:53.502059  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:53.502210  183213 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 10:58:53.502271  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:58:53.549066  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:58:53.549075  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:58:54.141833  183213 openrc.go:165] stop output: 
	I0601 10:58:54.141892  183213 ssh_runner.go:195] Run: sudo service crio status
	I0601 10:58:54.167122  183213 docker.go:187] disabling docker service ...
	I0601 10:58:54.167180  183213 ssh_runner.go:195] Run: sudo service docker.socket stop
	I0601 10:58:54.688872  183213 openrc.go:165] stop output: 
	** stderr ** 
	Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
	
	** /stderr **
	E0601 10:58:54.688911  183213 docker.go:190] "Failed to stop" err=<
		sudo service docker.socket stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
	 > service="docker.socket"
	I0601 10:58:54.688954  183213 ssh_runner.go:195] Run: sudo service docker.service stop
	I0601 10:58:55.185435  183213 openrc.go:165] stop output: 
	** stderr ** 
	Failed to stop docker.service.service: Unit docker.service.service not loaded.
	
	** /stderr **
	E0601 10:58:55.185463  183213 docker.go:193] "Failed to stop" err=<
		sudo service docker.service stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.service.service: Unit docker.service.service not loaded.
	 > service="docker.service"
	W0601 10:58:55.185478  183213 cruntime.go:284] disable failed: sudo service docker.service stop: Process exited with status 5
	stdout:
	
	stderr:
	Failed to stop docker.service.service: Unit docker.service.service not loaded.
	I0601 10:58:55.185515  183213 ssh_runner.go:195] Run: sudo service docker status
	W0601 10:58:55.208937  183213 containerd.go:185] disableOthers: Docker is still active
	I0601 10:58:55.209088  183213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 10:58:55.225063  183213 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 10:58:55.233943  183213 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 10:58:55.265470  183213 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 10:58:55.383103  183213 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 10:58:55.451945  183213 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 10:58:55.510182  183213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0601 10:58:55.558957  183213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 10:58:55.565496  183213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 10:58:55.571960  183213 ssh_runner.go:195] Run: sudo service containerd restart
	I0601 10:58:55.664015  183213 openrc.go:152] restart output: 
	I0601 10:58:55.664082  183213 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 10:58:55.664138  183213 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 10:58:55.668838  183213 start.go:468] Will wait 60s for crictl version
	I0601 10:58:55.668897  183213 ssh_runner.go:195] Run: sudo crictl version
	I0601 10:58:55.779914  183213 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 10:58:55.779976  183213 ssh_runner.go:195] Run: containerd --version
	I0601 10:58:55.809849  183213 ssh_runner.go:195] Run: containerd --version
	I0601 10:58:55.980954  183213 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 10:58:56.045997  183213 cli_runner.go:164] Run: docker network inspect auto-20220601104837-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:58:56.083562  183213 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0601 10:58:56.086907  183213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 10:58:56.344842  183213 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 10:58:56.528117  183213 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:58:56.528221  183213 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 10:58:56.558594  183213 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 10:58:56.558615  183213 containerd.go:461] Images already preloaded, skipping extraction
	I0601 10:58:56.558654  183213 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 10:58:56.582906  183213 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 10:58:56.582934  183213 cache_images.go:84] Images are preloaded, skipping loading
	I0601 10:58:56.582988  183213 ssh_runner.go:195] Run: sudo crictl info
	I0601 10:58:56.606225  183213 cni.go:95] Creating CNI manager for ""
	I0601 10:58:56.606247  183213 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 10:58:56.606263  183213 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 10:58:56.606275  183213 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20220601104837-6708 NodeName:auto-20220601104837-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 10:58:56.606385  183213 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "auto-20220601104837-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 10:58:56.606455  183213 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=auto-20220601104837-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:auto-20220601104837-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 10:58:56.606496  183213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 10:58:56.613406  183213 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 10:58:56.613540  183213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0601 10:58:56.622522  183213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (569 bytes)
	I0601 10:58:56.636194  183213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 10:58:56.648234  183213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0601 10:58:56.659986  183213 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0601 10:58:56.671604  183213 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0601 10:58:56.683394  183213 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0601 10:58:56.686078  183213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 10:58:56.716972  183213 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708 for IP: 192.168.76.2
	I0601 10:58:56.717085  183213 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 10:58:56.717136  183213 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 10:58:56.717198  183213 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/client.key
	I0601 10:58:56.717215  183213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/client.crt with IP's: []
	I0601 10:58:56.831463  183213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/client.crt ...
	I0601 10:58:56.831489  183213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/client.crt: {Name:mkd495c5dd00269d9829f57810a14d3a821640f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:58:56.831658  183213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/client.key ...
	I0601 10:58:56.831671  183213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/client.key: {Name:mk85e70771181a978ac40e2ad821166bc1fedf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:58:56.831764  183213 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.key.31bdca25
	I0601 10:58:56.831786  183213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 10:58:56.968595  183213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.crt.31bdca25 ...
	I0601 10:58:56.968624  183213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.crt.31bdca25: {Name:mk6fec8d7a3d260055810a996a5b9d9aec843333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:58:56.968790  183213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.key.31bdca25 ...
	I0601 10:58:56.968804  183213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.key.31bdca25: {Name:mk228a21c644523bd36b3eadac999b7a99fa9913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:58:56.968884  183213 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.crt
	I0601 10:58:56.968946  183213 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.key
	I0601 10:58:56.968992  183213 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/proxy-client.key
	I0601 10:58:56.969006  183213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/proxy-client.crt with IP's: []
	I0601 10:58:57.057394  183213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/proxy-client.crt ...
	I0601 10:58:57.057433  183213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/proxy-client.crt: {Name:mkd4ab255ae2542c68947a111716957596da71a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:58:57.068615  183213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/proxy-client.key ...
	I0601 10:58:57.068645  183213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/proxy-client.key: {Name:mkd9ae691df2f155ea36511f094bc7b7d787d3a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:58:57.068904  183213 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 10:58:57.068963  183213 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 10:58:57.068984  183213 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 10:58:57.069016  183213 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 10:58:57.069048  183213 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 10:58:57.069118  183213 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 10:58:57.069181  183213 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 10:58:57.069947  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 10:58:57.089972  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 10:58:57.110228  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 10:58:57.128354  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601104837-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 10:58:57.146374  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 10:58:57.164930  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 10:58:57.183609  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 10:58:57.202378  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 10:58:57.283342  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 10:58:57.393197  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 10:58:57.513500  183213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 10:58:57.532846  183213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 10:58:57.545588  183213 ssh_runner.go:195] Run: openssl version
	I0601 10:58:57.550116  183213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 10:58:57.557273  183213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 10:58:57.560165  183213 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 10:58:57.560210  183213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 10:58:57.564886  183213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 10:58:57.571960  183213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 10:58:57.578819  183213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 10:58:57.581764  183213 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 10:58:57.581816  183213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 10:58:57.586288  183213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 10:58:57.593235  183213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 10:58:57.600461  183213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 10:58:57.603363  183213 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 10:58:57.603404  183213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 10:58:57.680775  183213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
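The six commands above register each CA with OpenSSL's hash-based trust store: `openssl x509 -hash -noout` prints the subject hash, and the certificate is symlinked to /etc/ssl/certs/<hash>.0, where OpenSSL looks trusted CAs up by hash. A minimal standalone sketch of that pattern (the helper name is invented; this is not minikube's actual certs.go code):

// installCACert registers a PEM certificate the way the log does: compute
// the OpenSSL subject hash, then symlink the file to /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// openssl prints the subject hash (e.g. "b5213941") on stdout.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}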
	I0601 10:58:57.695641  183213 kubeadm.go:395] StartCluster: {Name:auto-20220601104837-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:auto-20220601104837-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:58:57.695735  183213 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 10:58:57.695771  183213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 10:58:57.720616  183213 cri.go:87] found id: ""
	I0601 10:58:57.720680  183213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 10:58:57.727853  183213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 10:58:57.735563  183213 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 10:58:57.735621  183213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 10:58:57.742415  183213 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 10:58:57.742452  183213 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
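The exit-status-2 `ls` above is the freshness probe: when any of the four kubeconfig files is missing there is no stale cluster state to clean up, so kubeadm init runs directly with the fixed ignore-preflight list. A rough local sketch of that probe (the helper name is invented; minikube runs the command over SSH inside the node):

// staleConfigPresent reports whether a previous cluster left kubeconfig
// files behind; ls exits with status 2 when any listed file is missing.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func staleConfigPresent() bool {
	err := exec.Command("sudo", "ls", "-la",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false // a file is missing: fresh node, skip cleanup
	}
	return err == nil
}

func main() {
	fmt.Println("stale configs present:", staleConfigPresent())
}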
	I0601 10:58:58.113908  183213 out.go:204]   - Generating certificates and keys ...
	I0601 10:59:00.693048  183213 out.go:204]   - Booting up control plane ...
	I0601 10:59:08.737221  183213 out.go:204]   - Configuring RBAC rules ...
	I0601 10:59:09.166274  183213 cni.go:95] Creating CNI manager for ""
	I0601 10:59:09.166314  183213 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 10:59:09.168637  183213 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 10:59:09.170116  183213 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 10:59:09.174344  183213 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 10:59:09.174363  183213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 10:59:09.188674  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 10:59:10.210194  183213 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.021482023s)
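"scp memory" in the line above means the manifest bytes come from an in-memory buffer rather than a local file; they are written to /var/tmp/minikube/cni.yaml on the node and applied with the cluster's bundled kubectl. A simplified sketch of those two operations (the helper name is invented; the paths are copied from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyCNI writes a rendered CNI manifest to the path used in the log and
// applies it with the kubectl binary that minikube ships for the cluster.
func applyCNI(manifest []byte) error {
	const path = "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.6/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifest, err := os.ReadFile("cni.yaml") // e.g. a kindnet manifest
	if err == nil {
		err = applyCNI(manifest)
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}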
	I0601 10:59:10.210252  183213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 10:59:10.210314  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:10.210347  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=auto-20220601104837-6708 minikube.k8s.io/updated_at=2022_06_01T10_59_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:10.217883  183213 ops.go:34] apiserver oom_adj: -16
	I0601 10:59:10.286500  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:10.859984  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:11.360335  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:11.860174  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:12.359408  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:12.859977  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:13.360036  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:13.859806  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:14.359593  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:14.859938  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:15.360251  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:15.860059  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:16.360165  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:16.859688  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:17.360054  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:17.859760  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:18.359791  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:18.859836  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:19.360259  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:19.860087  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:20.359759  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:20.859557  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:21.359692  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:21.860181  183213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:21.962025  183213 kubeadm.go:1045] duration metric: took 11.75175035s to wait for elevateKubeSystemPrivileges.
	I0601 10:59:21.962057  183213 kubeadm.go:397] StartCluster complete in 24.266422938s
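The burst of identical `kubectl get sa default` runs between 10:59:10 and 10:59:21 is a poll-until-success loop: the command fails until the default service account exists, and the ~500ms spacing of the timestamps matches a fixed sleep between attempts. A minimal sketch of such a loop (an invented helper, not minikube's elevateKubeSystemPrivileges code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, mirroring the retry cadence visible in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", kubeconfig).Run() == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
}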
	I0601 10:59:21.962076  183213 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:21.962179  183213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:59:21.964139  183213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:22.482393  183213 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20220601104837-6708" rescaled to 1
	I0601 10:59:22.482458  183213 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 10:59:22.484115  183213 out.go:177] * Verifying Kubernetes components...
	I0601 10:59:22.482522  183213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 10:59:22.482530  183213 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 10:59:22.482732  183213 config.go:178] Loaded profile config "auto-20220601104837-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:59:22.485775  183213 ssh_runner.go:195] Run: sudo service kubelet status
	I0601 10:59:22.485825  183213 addons.go:65] Setting storage-provisioner=true in profile "auto-20220601104837-6708"
	I0601 10:59:22.485859  183213 addons.go:153] Setting addon storage-provisioner=true in "auto-20220601104837-6708"
	W0601 10:59:22.485872  183213 addons.go:165] addon storage-provisioner should already be in state true
	I0601 10:59:22.485918  183213 host.go:66] Checking if "auto-20220601104837-6708" exists ...
	I0601 10:59:22.485827  183213 addons.go:65] Setting default-storageclass=true in profile "auto-20220601104837-6708"
	I0601 10:59:22.485968  183213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20220601104837-6708"
	I0601 10:59:22.486240  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:59:22.486416  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:59:22.535029  183213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 10:59:22.536345  183213 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 10:59:22.536364  183213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 10:59:22.536403  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:59:22.555706  183213 addons.go:153] Setting addon default-storageclass=true in "auto-20220601104837-6708"
	W0601 10:59:22.555732  183213 addons.go:165] addon default-storageclass should already be in state true
	I0601 10:59:22.555763  183213 host.go:66] Checking if "auto-20220601104837-6708" exists ...
	I0601 10:59:22.556285  183213 cli_runner.go:164] Run: docker container inspect auto-20220601104837-6708 --format={{.State.Status}}
	I0601 10:59:22.578102  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:59:22.600663  183213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 10:59:22.602251  183213 node_ready.go:35] waiting up to 5m0s for node "auto-20220601104837-6708" to be "Ready" ...
	I0601 10:59:22.604350  183213 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 10:59:22.604369  183213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 10:59:22.604409  183213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601104837-6708
	I0601 10:59:22.639264  183213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601104837-6708/id_rsa Username:docker}
	I0601 10:59:22.784635  183213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 10:59:22.877372  183213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 10:59:23.092732  183213 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0601 10:59:23.290193  183213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 10:59:23.291540  183213 addons.go:417] enableAddons completed in 809.018725ms
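The host record injected at 10:59:22–10:59:23 comes from the sed pipeline above, which inserts a `hosts` block ahead of CoreDNS's `forward . /etc/resolv.conf` stanza so that host.minikube.internal resolves to the gateway IP. The same transformation sketched in Go (the Corefile in main is an assumed minimal example and the function name is invented):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block mapping host.minikube.internal
// to ip immediately before the forward plugin stanza of a Corefile.
func injectHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(hosts) // insert just before the forward stanza
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}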
	I0601 10:59:24.608455  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:27.108520  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:29.108833  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:31.608969  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:34.108130  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:36.108559  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:38.108897  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:40.608446  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:42.608822  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:44.608956  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:46.610138  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:49.108671  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:51.108857  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:53.109105  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:55.608253  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:57.608450  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 10:59:59.608742  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:01.609069  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:04.108823  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:06.608443  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:08.608572  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:10.608933  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:13.108300  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:15.108495  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:17.607889  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:19.608172  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:22.108715  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:24.109628  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:26.608570  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:28.609411  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:31.107837  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:33.108149  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:35.608148  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:37.608622  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:39.608866  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:42.108587  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:44.608302  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:47.108002  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:49.108064  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:51.108166  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:53.607991  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:55.608181  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:00:57.608686  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:00.108469  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:02.608130  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:04.608243  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:07.108166  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:09.108206  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:11.108565  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:13.607915  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:15.608271  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:18.108186  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:20.608236  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:23.107913  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:25.109071  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:27.608388  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:30.108333  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:32.108589  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:34.608245  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:37.107754  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:39.108923  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:41.611235  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:44.109068  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:46.608043  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:48.608260  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:50.608340  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:52.608911  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:55.108550  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:01:57.608474  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:00.108875  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:02.609057  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:05.108704  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:07.108767  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:09.608652  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:11.608833  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:14.108035  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:16.108684  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:18.108857  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:20.608816  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:23.108576  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:25.108758  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:27.607943  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:29.608879  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:31.608974  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:34.108731  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:36.608181  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:38.608625  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:41.109063  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:43.608088  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:45.608441  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:48.108743  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:50.109142  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:52.608538  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:54.608842  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:57.108978  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:02:59.608343  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:01.608609  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:03.608719  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:06.107462  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:08.108866  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:10.608598  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:13.108903  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:15.109239  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:17.607804  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:19.608220  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:21.608520  183213 node_ready.go:58] node "auto-20220601104837-6708" has status "Ready":"False"
	I0601 11:03:22.610448  183213 node_ready.go:38] duration metric: took 4m0.008161021s waiting for node "auto-20220601104837-6708" to be "Ready" ...
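Everything from 10:59:22 onward is the readiness wait that decides this test: the node's Ready condition is rechecked roughly every 2.5s and stays False for the entire window (4m0.008s of the 5m budget spent here), so the start exits with the GUEST_START error below. A standalone sketch of such a wait using kubectl's JSONPath output (the helper is invented; minikube's node_ready.go uses the Kubernetes Go client rather than shelling out):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls the node's Ready condition until it is "True" or the
// timeout elapses, approximating the node_ready wait in the log.
func waitNodeReady(node, kubeconfig string, timeout time.Duration) error {
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "node", node, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %s", node, timeout)
}

func main() {
	fmt.Println(waitNodeReady("auto-20220601104837-6708",
		"/var/lib/minikube/kubeconfig", 5*time.Minute))
}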
	I0601 11:03:22.612890  183213 out.go:177] 
	W0601 11:03:22.614268  183213 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:03:22.614282  183213 out.go:239] * 
	W0601 11:03:22.614951  183213 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:03:22.617227  183213 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (483.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (296.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220601105850-6708 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20220601105850-6708 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (4m54.771808755s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220601105850-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node old-k8s-version-20220601105850-6708 in cluster old-k8s-version-20220601105850-6708
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 10:58:50.478339  211793 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:58:50.478446  211793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:58:50.478456  211793 out.go:309] Setting ErrFile to fd 2...
	I0601 10:58:50.478461  211793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:58:50.478583  211793 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:58:50.478926  211793 out.go:303] Setting JSON to false
	I0601 10:58:50.480837  211793 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2485,"bootTime":1654078646,"procs":836,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:58:50.480929  211793 start.go:125] virtualization: kvm guest
	I0601 10:58:50.483500  211793 out.go:177] * [old-k8s-version-20220601105850-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 10:58:50.484939  211793 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:58:50.484948  211793 notify.go:193] Checking for updates...
	I0601 10:58:50.486542  211793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:58:50.488168  211793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:58:50.489706  211793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:58:50.491340  211793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 10:58:50.493438  211793 config.go:178] Loaded profile config "auto-20220601104837-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:58:50.493590  211793 config.go:178] Loaded profile config "calico-20220601104839-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:58:50.493716  211793 config.go:178] Loaded profile config "cilium-20220601104839-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:58:50.493778  211793 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:58:50.547172  211793 docker.go:137] docker version: linux-20.10.16
	I0601 10:58:50.547275  211793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:58:50.682318  211793 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 10:58:50.59021848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:58:50.682455  211793 docker.go:254] overlay module found
	I0601 10:58:50.684870  211793 out.go:177] * Using the docker driver based on user configuration
	I0601 10:58:50.686214  211793 start.go:284] selected driver: docker
	I0601 10:58:50.686229  211793 start.go:806] validating driver "docker" against <nil>
	I0601 10:58:50.686246  211793 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:58:50.687475  211793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:58:50.817608  211793 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 10:58:50.724207964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:58:50.817761  211793 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 10:58:50.818009  211793 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 10:58:50.819775  211793 out.go:177] * Using Docker driver with the root privilege
	I0601 10:58:50.821106  211793 cni.go:95] Creating CNI manager for ""
	I0601 10:58:50.821128  211793 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 10:58:50.821158  211793 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 10:58:50.821171  211793 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 10:58:50.821178  211793 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 10:58:50.821200  211793 start_flags.go:306] config:
	{Name:old-k8s-version-20220601105850-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601105850-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:58:50.823859  211793 out.go:177] * Starting control plane node old-k8s-version-20220601105850-6708 in cluster old-k8s-version-20220601105850-6708
	I0601 10:58:50.825180  211793 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 10:58:50.826502  211793 out.go:177] * Pulling base image ...
	I0601 10:58:50.827827  211793 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 10:58:50.827855  211793 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:58:50.827919  211793 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0601 10:58:50.827935  211793 cache.go:57] Caching tarball of preloaded images
	I0601 10:58:50.828140  211793 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 10:58:50.828162  211793 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0601 10:58:50.828312  211793 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/config.json ...
	I0601 10:58:50.828335  211793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/config.json: {Name:mkd7204fc90ad6784e007f7fa85232594794af06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:58:50.880051  211793 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 10:58:50.880084  211793 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 10:58:50.880098  211793 cache.go:206] Successfully downloaded all kic artifacts
	I0601 10:58:50.880135  211793 start.go:352] acquiring machines lock for old-k8s-version-20220601105850-6708: {Name:mke14ebe59a9bafbbc986150da3a88f558d9476c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:58:50.880269  211793 start.go:356] acquired machines lock for "old-k8s-version-20220601105850-6708" in 109.579µs
	I0601 10:58:50.880305  211793 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220601105850-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601105850-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 10:58:50.880415  211793 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:58:50.883486  211793 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 10:58:50.883758  211793 start.go:165] libmachine.API.Create for "old-k8s-version-20220601105850-6708" (driver="docker")
	I0601 10:58:50.883794  211793 client.go:168] LocalClient.Create starting
	I0601 10:58:50.883886  211793 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 10:58:50.883930  211793 main.go:134] libmachine: Decoding PEM data...
	I0601 10:58:50.883955  211793 main.go:134] libmachine: Parsing certificate...
	I0601 10:58:50.884029  211793 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 10:58:50.884054  211793 main.go:134] libmachine: Decoding PEM data...
	I0601 10:58:50.884072  211793 main.go:134] libmachine: Parsing certificate...
	I0601 10:58:50.884430  211793 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601105850-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:58:50.930465  211793 cli_runner.go:211] docker network inspect old-k8s-version-20220601105850-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:58:50.930554  211793 network_create.go:272] running [docker network inspect old-k8s-version-20220601105850-6708] to gather additional debugging logs...
	I0601 10:58:50.930591  211793 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601105850-6708
	W0601 10:58:50.967847  211793 cli_runner.go:211] docker network inspect old-k8s-version-20220601105850-6708 returned with exit code 1
	I0601 10:58:50.967895  211793 network_create.go:275] error running [docker network inspect old-k8s-version-20220601105850-6708]: docker network inspect old-k8s-version-20220601105850-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601105850-6708
	I0601 10:58:50.967928  211793 network_create.go:277] output of [docker network inspect old-k8s-version-20220601105850-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601105850-6708
	
	** /stderr **
	I0601 10:58:50.967987  211793 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:58:51.001561  211793 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-ab9449f0ea0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:97:ae:4d:57}}
	I0601 10:58:51.002571  211793 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000010298] misses:0}
	I0601 10:58:51.002616  211793 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:58:51.002638  211793 network_create.go:115] attempt to create docker network old-k8s-version-20220601105850-6708 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 10:58:51.002694  211793 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601105850-6708
	I0601 10:58:51.082924  211793 network_create.go:99] docker network old-k8s-version-20220601105850-6708 192.168.58.0/24 created
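The two network steps above (probe the existing bridge subnets, then create a dedicated network) are plain Docker and can be reproduced by hand. A minimal sketch, assuming a throwaway network name; demo-net is illustrative, the flags are the ones from the log:

    docker network create --driver=bridge \
      --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true demo-net
    # Read the subnet back the same way the inspect template above does:
    docker network inspect demo-net --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
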
	I0601 10:58:51.082956  211793 kic.go:106] calculated static IP "192.168.58.2" for the "old-k8s-version-20220601105850-6708" container
	I0601 10:58:51.083021  211793 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:58:51.117018  211793 cli_runner.go:164] Run: docker volume create old-k8s-version-20220601105850-6708 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601105850-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 10:58:51.149371  211793 oci.go:103] Successfully created a docker volume old-k8s-version-20220601105850-6708
	I0601 10:58:51.149446  211793 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220601105850-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220601105850-6708 --entrypoint /usr/bin/test -v old-k8s-version-20220601105850-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 10:58:51.733007  211793 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220601105850-6708
	I0601 10:58:51.733065  211793 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 10:58:51.733082  211793 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 10:58:51.733138  211793 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220601105850-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 10:59:00.607926  211793 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220601105850-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (8.874719582s)
	I0601 10:59:00.607964  211793 kic.go:188] duration metric: took 8.874879 seconds to extract preloaded images to volume
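The extraction above is a standard pattern: a throwaway container mounts the preload tarball read-only and the named volume read-write, then untars into the volume. A sketch with placeholder names (demo-data and the local tarball path are illustrative; the log's image reference also carries a sha256 digest, omitted here):

    docker volume create demo-data
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v demo-data:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807 \
      -I lz4 -xf /preloaded.tar -C /extractDir
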
	W0601 10:59:00.608100  211793 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 10:59:00.608228  211793 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 10:59:00.744036  211793 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220601105850-6708 --name old-k8s-version-20220601105850-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220601105850-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220601105850-6708 --network old-k8s-version-20220601105850-6708 --ip 192.168.58.2 --volume old-k8s-version-20220601105850-6708:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 10:59:01.217951  211793 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Running}}
	I0601 10:59:01.269173  211793 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 10:59:01.313590  211793 cli_runner.go:164] Run: docker exec old-k8s-version-20220601105850-6708 stat /var/lib/dpkg/alternatives/iptables
	I0601 10:59:01.394531  211793 oci.go:247] the created container "old-k8s-version-20220601105850-6708" has a running status.
	I0601 10:59:01.394568  211793 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa...
	I0601 10:59:01.564418  211793 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 10:59:01.683025  211793 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 10:59:01.729084  211793 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 10:59:01.729114  211793 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220601105850-6708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 10:59:01.826128  211793 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 10:59:01.870795  211793 machine.go:88] provisioning docker machine ...
	I0601 10:59:01.870842  211793 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601105850-6708"
	I0601 10:59:01.870907  211793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
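Each --publish=127.0.0.1::<port> in the docker run above left the host side blank, so Docker chose an ephemeral loopback port; the inspect template here reads the chosen port back (49397 below, used for SSH). The same lookup by hand:

    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      old-k8s-version-20220601105850-6708
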
	I0601 10:59:01.915612  211793 main.go:134] libmachine: Using SSH client type: native
	I0601 10:59:01.915839  211793 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49397 <nil> <nil>}
	I0601 10:59:01.915915  211793 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601105850-6708 && echo "old-k8s-version-20220601105850-6708" | sudo tee /etc/hostname
	I0601 10:59:02.065776  211793 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601105850-6708
	
	I0601 10:59:02.065866  211793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 10:59:02.101392  211793 main.go:134] libmachine: Using SSH client type: native
	I0601 10:59:02.101571  211793 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49397 <nil> <nil>}
	I0601 10:59:02.101600  211793 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601105850-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601105850-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601105850-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 10:59:02.220914  211793 main.go:134] libmachine: SSH cmd err, output: <nil>: 
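The shell snippet above is an idempotent /etc/hosts rewrite: grep -xq matches whole lines only, so an existing 127.0.1.1 entry is rewritten in place, a missing one is appended, and reruns change nothing. A quick verification after it runs (illustrative):

    grep '^127.0.1.1' /etc/hosts
    # expect: 127.0.1.1 old-k8s-version-20220601105850-6708
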
	I0601 10:59:02.220947  211793 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 10:59:02.220973  211793 ubuntu.go:177] setting up certificates
	I0601 10:59:02.220983  211793 provision.go:83] configureAuth start
	I0601 10:59:02.221039  211793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601105850-6708
	I0601 10:59:02.253928  211793 provision.go:138] copyHostCerts
	I0601 10:59:02.253977  211793 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 10:59:02.253984  211793 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 10:59:02.282388  211793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 10:59:02.282522  211793 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 10:59:02.282538  211793 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 10:59:02.282577  211793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 10:59:02.282635  211793 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 10:59:02.282646  211793 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 10:59:02.282672  211793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 10:59:02.282994  211793 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601105850-6708 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601105850-6708]
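The san=[...] list above is what lets clients validate the machine's server cert under the node IP, loopback, and the minikube hostnames. The generated cert can be checked with openssl; a sketch, assuming the server.pem path from the log (shortened here):

    openssl x509 -noout -text -in .minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
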
	I0601 10:59:02.396475  211793 provision.go:172] copyRemoteCerts
	I0601 10:59:02.396540  211793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 10:59:02.396582  211793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 10:59:02.438716  211793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 10:59:02.529625  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 10:59:02.551808  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0601 10:59:02.574111  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 10:59:02.657453  211793 provision.go:86] duration metric: configureAuth took 436.449809ms
	I0601 10:59:02.657487  211793 ubuntu.go:193] setting minikube options for container-runtime
	I0601 10:59:02.657705  211793 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 10:59:02.657723  211793 machine.go:91] provisioned docker machine in 786.900303ms
	I0601 10:59:02.657731  211793 client.go:171] LocalClient.Create took 11.773931362s
	I0601 10:59:02.657752  211793 start.go:173] duration metric: libmachine.API.Create for "old-k8s-version-20220601105850-6708" took 11.773989459s
	I0601 10:59:02.657766  211793 start.go:306] post-start starting for "old-k8s-version-20220601105850-6708" (driver="docker")
	I0601 10:59:02.657774  211793 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 10:59:02.657845  211793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 10:59:02.657894  211793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 10:59:02.724821  211793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 10:59:02.817980  211793 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 10:59:02.821708  211793 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 10:59:02.821737  211793 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 10:59:02.821751  211793 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 10:59:02.821759  211793 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 10:59:02.821771  211793 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 10:59:02.821832  211793 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 10:59:02.821922  211793 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 10:59:02.822026  211793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 10:59:02.830473  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 10:59:02.850655  211793 start.go:309] post-start completed in 192.873312ms
	I0601 10:59:02.851044  211793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601105850-6708
	I0601 10:59:02.911258  211793 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/config.json ...
	I0601 10:59:02.911510  211793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:59:02.911559  211793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 10:59:02.951047  211793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 10:59:03.057393  211793 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:59:03.062440  211793 start.go:134] duration metric: createHost completed in 12.182013832s
	I0601 10:59:03.062460  211793 start.go:81] releasing machines lock for "old-k8s-version-20220601105850-6708", held for 12.182170562s
	I0601 10:59:03.062536  211793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601105850-6708
	I0601 10:59:03.107832  211793 ssh_runner.go:195] Run: systemctl --version
	I0601 10:59:03.107923  211793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 10:59:03.107945  211793 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 10:59:03.108021  211793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 10:59:03.154130  211793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 10:59:03.158290  211793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 10:59:03.274153  211793 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 10:59:03.289311  211793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 10:59:03.301320  211793 docker.go:187] disabling docker service ...
	I0601 10:59:03.301375  211793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 10:59:03.322783  211793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 10:59:03.334082  211793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 10:59:03.439571  211793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 10:59:03.561572  211793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 10:59:03.571659  211793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 10:59:03.593562  211793 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.1"|' -i /etc/containerd/config.toml"
	I0601 10:59:03.604294  211793 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 10:59:03.614649  211793 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 10:59:03.626138  211793 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 10:59:03.635138  211793 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 10:59:03.649653  211793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
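The base64 payload written above is verifiable locally: dmVyc2lvbiA9IDIK decodes to "version = 2", so the drop-in pins containerd's config schema to version 2, and the imports line patched into config.toml just above pulls it into the main configuration:

    printf %s "dmVyc2lvbiA9IDIK" | base64 -d
    # version = 2
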
	I0601 10:59:03.666866  211793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 10:59:03.674191  211793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 10:59:03.681157  211793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 10:59:03.762621  211793 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 10:59:03.860079  211793 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 10:59:03.860160  211793 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 10:59:03.864563  211793 start.go:468] Will wait 60s for crictl version
	I0601 10:59:03.864630  211793 ssh_runner.go:195] Run: sudo crictl version
	I0601 10:59:03.899120  211793 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T10:59:03Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 10:59:14.949257  211793 ssh_runner.go:195] Run: sudo crictl version
	I0601 10:59:14.982588  211793 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 10:59:14.982648  211793 ssh_runner.go:195] Run: containerd --version
	I0601 10:59:15.017491  211793 ssh_runner.go:195] Run: containerd --version
	I0601 10:59:15.053229  211793 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.6.4 ...
	I0601 10:59:15.054720  211793 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601105850-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:59:15.092256  211793 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0601 10:59:15.095601  211793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 10:59:15.109417  211793 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 10:59:15.110976  211793 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 10:59:15.111047  211793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 10:59:15.134558  211793 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 10:59:15.134578  211793 containerd.go:461] Images already preloaded, skipping extraction
	I0601 10:59:15.134619  211793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 10:59:15.156979  211793 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 10:59:15.156998  211793 cache_images.go:84] Images are preloaded, skipping loading
	I0601 10:59:15.157041  211793 ssh_runner.go:195] Run: sudo crictl info
	I0601 10:59:15.179439  211793 cni.go:95] Creating CNI manager for ""
	I0601 10:59:15.179480  211793 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 10:59:15.179502  211793 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 10:59:15.179530  211793 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601105850-6708 NodeName:old-k8s-version-20220601105850-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 10:59:15.179708  211793 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220601105850-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601105850-6708
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
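The rendered file above stacks four YAML documents: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta1, the API version matching the v1.16.0 target), plus a KubeletConfiguration and a KubeProxyConfiguration. It is copied to the node below as /var/tmp/minikube/kubeadm.yaml.new; a quick structural check on the node (illustrative):

    grep -c '^apiVersion:' /var/tmp/minikube/kubeadm.yaml
    # expect: 4
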
	I0601 10:59:15.179841  211793 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220601105850-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601105850-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
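The [Unit]/[Service] fragment above is written below as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The empty ExecStart= line is standard drop-in practice: it clears the base unit's command before redefining it with the v1.16.0 kubelet and its containerd flags. To see the merged unit on the node (sketch):

    systemctl cat kubelet
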
	I0601 10:59:15.179993  211793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 10:59:15.188725  211793 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 10:59:15.188797  211793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 10:59:15.195794  211793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (580 bytes)
	I0601 10:59:15.208430  211793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 10:59:15.221166  211793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0601 10:59:15.233322  211793 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 10:59:15.235991  211793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 10:59:15.244837  211793 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708 for IP: 192.168.58.2
	I0601 10:59:15.244934  211793 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 10:59:15.244978  211793 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 10:59:15.245038  211793 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.key
	I0601 10:59:15.245052  211793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt with IP's: []
	I0601 10:59:15.318758  211793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt ...
	I0601 10:59:15.318794  211793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: {Name:mk15c04a6a056b4f1e0a8bd47dafd97a705fd0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:15.318997  211793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.key ...
	I0601 10:59:15.319018  211793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.key: {Name:mkc318ad552d98fbd481cea782de42344de14d1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:15.319143  211793 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.key.cee25041
	I0601 10:59:15.319166  211793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 10:59:15.559123  211793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.crt.cee25041 ...
	I0601 10:59:15.559157  211793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.crt.cee25041: {Name:mk2bbbbed01fbc35322747567e430b84fcea3cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:15.559380  211793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.key.cee25041 ...
	I0601 10:59:15.559398  211793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.key.cee25041: {Name:mk8c5d79eb5523dbab6b2d991ae53e547b11191b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:15.559512  211793 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.crt
	I0601 10:59:15.559567  211793 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.key
	I0601 10:59:15.559612  211793 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.key
	I0601 10:59:15.559626  211793 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.crt with IP's: []
	I0601 10:59:15.731296  211793 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.crt ...
	I0601 10:59:15.731324  211793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.crt: {Name:mkdb1c7448a0ecd04a5e1a6d5050de2614056527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:15.731507  211793 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.key ...
	I0601 10:59:15.731522  211793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.key: {Name:mk8b97de619fd3d6e3e7c603cf64a32d1c0c902f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:15.731704  211793 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 10:59:15.731739  211793 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 10:59:15.731753  211793 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 10:59:15.731775  211793 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 10:59:15.731799  211793 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 10:59:15.731820  211793 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 10:59:15.731858  211793 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 10:59:15.732404  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 10:59:15.750236  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 10:59:15.767489  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 10:59:15.786061  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 10:59:15.802956  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 10:59:15.819024  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 10:59:15.835425  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 10:59:15.851933  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 10:59:15.869328  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 10:59:15.888946  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 10:59:15.907441  211793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 10:59:15.930178  211793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 10:59:15.944472  211793 ssh_runner.go:195] Run: openssl version
	I0601 10:59:15.949008  211793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 10:59:15.956942  211793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 10:59:15.960299  211793 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 10:59:15.960354  211793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 10:59:15.965272  211793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 10:59:15.974102  211793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 10:59:15.981629  211793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 10:59:15.984664  211793 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 10:59:15.984712  211793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 10:59:15.990002  211793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 10:59:15.997357  211793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 10:59:16.004435  211793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 10:59:16.007434  211793 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 10:59:16.007490  211793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 10:59:16.012547  211793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
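The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: OpenSSL resolves trust anchors in /etc/ssl/certs by <subject-hash>.0, which is exactly what the openssl x509 -hash runs compute. The same linking by hand (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
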
	I0601 10:59:16.020014  211793 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601105850-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601105850-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:59:16.020122  211793 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 10:59:16.020162  211793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 10:59:16.043255  211793 cri.go:87] found id: ""
	I0601 10:59:16.043303  211793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 10:59:16.049817  211793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 10:59:16.057019  211793 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 10:59:16.057070  211793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 10:59:16.064706  211793 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 10:59:16.064756  211793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 10:59:16.477249  211793 out.go:204]   - Generating certificates and keys ...
	I0601 10:59:18.779335  211793 out.go:204]   - Booting up control plane ...
	I0601 10:59:28.320507  211793 out.go:204]   - Configuring RBAC rules ...
	I0601 10:59:28.740163  211793 cni.go:95] Creating CNI manager for ""
	I0601 10:59:28.740190  211793 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 10:59:28.742493  211793 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 10:59:28.743621  211793 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 10:59:28.747152  211793 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0601 10:59:28.747170  211793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 10:59:28.760546  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 10:59:29.063565  211793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 10:59:29.063632  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:29.063657  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=old-k8s-version-20220601105850-6708 minikube.k8s.io/updated_at=2022_06_01T10_59_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:29.070523  211793 ops.go:34] apiserver oom_adj: -16
	I0601 10:59:29.176651  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:29.742201  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:30.241967  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:30.742637  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:31.242652  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:31.742846  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:32.242512  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:32.742776  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:33.241882  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:33.742044  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:34.241998  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:34.742428  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:35.242607  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:35.741880  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:36.242452  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:36.742760  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:37.241796  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:37.742456  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:38.242722  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:38.742391  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:39.242405  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:39.742461  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:40.241981  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:40.742521  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:41.242339  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:41.742615  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:42.242649  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:42.741840  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:43.242754  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:43.742473  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:44.243402  211793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 10:59:44.427056  211793 kubeadm.go:1045] duration metric: took 15.363477236s to wait for elevateKubeSystemPrivileges.
	I0601 10:59:44.427089  211793 kubeadm.go:397] StartCluster complete in 28.407082248s
	I0601 10:59:44.427121  211793 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:44.427230  211793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:59:44.429393  211793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:59:44.962874  211793 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220601105850-6708" rescaled to 1
	I0601 10:59:44.962939  211793 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 10:59:44.966062  211793 out.go:177] * Verifying Kubernetes components...
	I0601 10:59:44.963076  211793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 10:59:44.963097  211793 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 10:59:44.963288  211793 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 10:59:44.967770  211793 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220601105850-6708"
	I0601 10:59:44.967786  211793 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220601105850-6708"
	W0601 10:59:44.967791  211793 addons.go:165] addon storage-provisioner should already be in state true
	I0601 10:59:44.967826  211793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 10:59:44.967836  211793 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 10:59:44.967921  211793 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220601105850-6708"
	I0601 10:59:44.967938  211793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220601105850-6708"
	I0601 10:59:44.968262  211793 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 10:59:44.968409  211793 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 10:59:45.031524  211793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 10:59:45.033249  211793 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 10:59:45.033285  211793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 10:59:45.033336  211793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 10:59:45.058128  211793 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220601105850-6708"
	W0601 10:59:45.058164  211793 addons.go:165] addon default-storageclass should already be in state true
	I0601 10:59:45.058194  211793 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 10:59:45.058699  211793 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 10:59:45.127344  211793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 10:59:45.152105  211793 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 10:59:45.152133  211793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 10:59:45.152188  211793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 10:59:45.157232  211793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 10:59:45.157600  211793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 10:59:45.210957  211793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 10:59:45.300321  211793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 10:59:45.490575  211793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 10:59:45.775666  211793 start.go:806] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0601 10:59:45.985714  211793 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 10:59:45.987194  211793 addons.go:417] enableAddons completed in 1.024111922s
	I0601 10:59:47.166613  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 10:59:49.666454  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 10:59:52.166274  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 10:59:54.166399  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 10:59:56.166447  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 10:59:58.167372  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:00.666579  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:02.666638  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:04.666842  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:07.166379  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:09.665922  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:11.666927  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:14.166427  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:16.665879  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:18.666580  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:21.165736  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:23.165924  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:25.166635  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:27.166990  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:29.666010  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:32.166191  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:34.666548  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:37.166862  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:39.665811  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:41.666480  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:44.166168  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:46.166327  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:48.167713  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:50.666792  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:53.165831  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:55.166158  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:57.166588  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:00:59.666753  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:02.166104  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:04.166256  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:06.166673  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:08.167210  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:10.666872  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:13.166037  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:15.166772  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:17.666303  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:19.666395  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:22.165624  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:24.166830  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:26.666558  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:29.166789  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:31.666198  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:33.666330  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:36.165793  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:38.168045  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:40.666558  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:42.666763  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:45.166683  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:47.666311  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:49.666369  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:51.666813  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:54.166720  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:56.166843  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:01:58.167400  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:00.666838  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:03.166813  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:05.665953  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:07.666391  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:09.668531  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:12.166810  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:14.666004  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:16.666420  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:19.166750  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:21.666353  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:23.666623  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:26.166237  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:28.167530  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:30.666630  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:33.166235  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:35.166301  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:37.665798  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:39.666055  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:41.666475  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:44.166283  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:46.166396  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:48.167279  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:50.666544  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:53.166681  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:55.666195  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:02:58.166746  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:00.665925  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:02.666751  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:05.166223  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:07.166963  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:09.666739  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:12.166119  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:14.166773  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:16.666118  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:18.666994  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:21.166907  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:23.666488  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:25.666748  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:28.169412  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:30.667075  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:33.166553  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:35.166630  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:37.665903  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:39.666717  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:42.166889  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:44.666300  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:45.167926  211793 node_ready.go:38] duration metric: took 4m0.010654116s waiting for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:03:45.169998  211793 out.go:177] 
	W0601 11:03:45.171405  211793 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:03:45.171426  211793 out.go:239] * 
	W0601 11:03:45.172177  211793 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:03:45.174263  211793 out.go:177] 

                                                
                                                
** /stderr **
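What the stderr trace shows: the control plane bootstraps cleanly (the RBAC binding, node labels, both addons, and the host.minikube.internal CoreDNS record are all in place by 10:59:45), but the kubelet never reports the node Ready, so the node_ready.go poll fires every ~2.5s until the wait budget is exhausted at 11:03:45 and the start exits with GUEST_START. The sketch below illustrates what such a readiness poll looks like in Go with client-go; this is a minimal sketch, not minikube's actual implementation, and the kubeconfig path, poll interval, and timeout are assumptions taken from the log above.

// Minimal readiness-poll sketch (assumes client-go; NOT minikube's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's conditions until NodeReady is True or the
// timeout elapses, mirroring the node_ready.go:58 loop in the trace above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for node %q to be Ready", name)
}

func main() {
	// Kubeconfig path and node name taken from the log; the 2.5s interval
	// and 6m timeout are illustrative values.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitNodeReady(context.Background(), cs, "old-k8s-version-20220601105850-6708", 2500*time.Millisecond, 6*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}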
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-20220601105850-6708 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601105850-6708
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601105850-6708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0",
	        "Created": "2022-06-01T10:59:00.78565124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214562,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T10:59:01.206141646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hosts",
	        "LogPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0-json.log",
	        "Name": "/old-k8s-version-20220601105850-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601105850-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601105850-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601105850-6708",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601105850-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601105850-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1256a9334e29c4a4e5495d8f827d7d7664f9ca7db2fab32facb03db36a3b3af6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49394"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1256a9334e29",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601105850-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3b070aceb311",
	                        "old-k8s-version-20220601105850-6708"
	                    ],
	                    "NetworkID": "99443bab5d3fa350d07dfff0b6c1624f2cd2601ac21b76ee77d57de53df02f62",
	                    "EndpointID": "f8f8bbe3bd358574febf4fc32d4b04efab03dd462466478278f465336715a20f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
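The inspect output above shows the container itself is healthy: running since 10:59:01, with 22/tcp published at 127.0.0.1:49397, the same port the sshutil client dialed at 10:59:45, so the failure is inside the guest rather than at the Docker layer. For reference, here is a small Go sketch of reading that mapped SSH port out of `docker container inspect` JSON; minikube itself extracts it with a Go template (the cli_runner `docker container inspect -f ...` calls in the trace), so this JSON-decoding variant is only an illustrative alternative.

// Sketch: read the published host port for 22/tcp from `docker container
// inspect` output like the JSON above. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models just the slice of the inspect JSON we need.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func sshHostPort(container string) (string, error) {
	// `docker container inspect` prints a JSON array, one entry per container.
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("22/tcp is not published for %s", container)
	}
	return bindings[0].HostPort, nil // "49397" for the container above
}

func main() {
	port, err := sshHostPort("old-k8s-version-20220601105850-6708")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port)
}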
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220601105850-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | force-systemd-flag-20220601105435-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:55 UTC | 01 Jun 22 10:55 UTC |
	|         | force-systemd-flag-20220601105435-6708            |                                        |         |                |                     |                     |
	| start   | -p kindnet-20220601104838-6708                    | kindnet-20220601104838-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:55 UTC | 01 Jun 22 10:56 UTC |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --cni=kindnet --driver=docker                     |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |                |                     |                     |
	| ssh     | -p kindnet-20220601104838-6708                    | kindnet-20220601104838-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:56 UTC | 01 Jun 22 10:56 UTC |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p kindnet-20220601104838-6708                    | kindnet-20220601104838-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:56 UTC | 01 Jun 22 10:56 UTC |
	| start   | -p                                                | cert-expiration-20220601105338-6708    | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:57 UTC |
	|         | cert-expiration-20220601105338-6708               |                                        |         |                |                     |                     |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --cert-expiration=8760h                           |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |                |                     |                     |
	| delete  | -p                                                | cert-expiration-20220601105338-6708    | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:57 UTC |
	|         | cert-expiration-20220601105338-6708               |                                        |         |                |                     |                     |
	| start   | -p                                                | running-upgrade-20220601105304-6708    | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:57 UTC |
	|         | running-upgrade-20220601105304-6708               |                                        |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |                |                     |                     |
	|         | -v=1 --driver=docker                              |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |                |                     |                     |
	| delete  | -p                                                | running-upgrade-20220601105304-6708    | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:57 UTC |
	|         | running-upgrade-20220601105304-6708               |                                        |         |                |                     |                     |
	| start   | -p                                                | enable-default-cni-20220601104837-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:56 UTC | 01 Jun 22 10:57 UTC |
	|         | enable-default-cni-20220601104837-6708            |                                        |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220601104837-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:57 UTC |
	|         | enable-default-cni-20220601104837-6708            |                                        |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601104837-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	|         | enable-default-cni-20220601104837-6708            |                                        |         |                |                     |                     |
	| start   | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708             | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:58 UTC |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |                |                     |                     |
	| ssh     | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708             | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708             | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	| start   | -p calico-20220601104839-6708                     | calico-20220601104839-6708             | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --cni=calico --driver=docker                      |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |                |                     |                     |
	| ssh     | -p calico-20220601104839-6708                     | calico-20220601104839-6708             | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| start   | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708             | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                        |         |                |                     |                     |
	|         | --cni=cilium --driver=docker                      |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |                |                     |                     |
	| ssh     | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708             | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                        |         |                |                     |                     |
	| delete  | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708             | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	| start   | -p                                                | no-preload-20220601105939-6708         | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                        |         |                |                     |                     |
	|         | --memory=2200                                     |                                        |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                        |         |                |                     |                     |
	|         | --driver=docker                                   |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                        |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601105939-6708         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                        |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601105939-6708         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601105939-6708         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                        |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |                |                     |                     |
	| logs    | auto-20220601104837-6708 logs                     | auto-20220601104837-6708               | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | -n 25                                             |                                        |         |                |                     |                     |
	| delete  | -p auto-20220601104837-6708                       | auto-20220601104837-6708               | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|---------|---------------------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:03:27
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:03:27.299666  232046 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:03:27.299777  232046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:03:27.299787  232046 out.go:309] Setting ErrFile to fd 2...
	I0601 11:03:27.299797  232046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:03:27.299950  232046 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:03:27.300231  232046 out.go:303] Setting JSON to false
	I0601 11:03:27.301890  232046 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2762,"bootTime":1654078646,"procs":770,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:03:27.301963  232046 start.go:125] virtualization: kvm guest
	I0601 11:03:27.304661  232046 out.go:177] * [embed-certs-20220601110327-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:03:27.306100  232046 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:03:27.307438  232046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:03:27.306099  232046 notify.go:193] Checking for updates...
	I0601 11:03:27.308848  232046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:03:27.310220  232046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:03:27.311532  232046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:03:27.313329  232046 config.go:178] Loaded profile config "calico-20220601104839-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:03:27.313514  232046 config.go:178] Loaded profile config "no-preload-20220601105939-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:03:27.313633  232046 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:03:27.313685  232046 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:03:27.354261  232046 docker.go:137] docker version: linux-20.10.16
	I0601 11:03:27.354368  232046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:03:27.480240  232046 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:03:27.386350535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
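
The driver probe above shells out to docker system info --format "{{json .}}" and parses the result. A minimal, self-contained Go sketch of that pattern (illustrative only, not minikube's info.go; it decodes into a generic map instead of minikube's typed struct):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// same command as the cli_runner invocation logged above
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info map[string]interface{}
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Println("ServerVersion:", info["ServerVersion"]) // e.g. 20.10.16
	}
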
	I0601 11:03:27.480347  232046 docker.go:254] overlay module found
	I0601 11:03:27.482738  232046 out.go:177] * Using the docker driver based on user configuration
	I0601 11:03:27.484175  232046 start.go:284] selected driver: docker
	I0601 11:03:27.484191  232046 start.go:806] validating driver "docker" against <nil>
	I0601 11:03:27.484208  232046 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:03:27.485098  232046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:03:27.589407  232046 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:03:27.51589514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:03:27.589532  232046 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:03:27.589672  232046 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:03:27.591603  232046 out.go:177] * Using Docker driver with the root privilege
	I0601 11:03:27.592928  232046 cni.go:95] Creating CNI manager for ""
	I0601 11:03:27.592942  232046 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:03:27.592957  232046 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:03:27.592967  232046 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:03:27.592974  232046 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 11:03:27.592999  232046 start_flags.go:306] config:
	{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:03:27.594436  232046 out.go:177] * Starting control plane node embed-certs-20220601110327-6708 in cluster embed-certs-20220601110327-6708
	I0601 11:03:27.595727  232046 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:03:27.597093  232046 out.go:177] * Pulling base image ...
	I0601 11:03:27.598435  232046 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:03:27.598463  232046 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:03:27.598480  232046 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:03:27.598602  232046 cache.go:57] Caching tarball of preloaded images
	I0601 11:03:27.598818  232046 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:03:27.598843  232046 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:03:27.598939  232046 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:03:27.598960  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json: {Name:mk24cb3999beb25f5865c696fbad7fc73716c1d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:27.646499  232046 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:03:27.646524  232046 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:03:27.646533  232046 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:03:27.646570  232046 start.go:352] acquiring machines lock for embed-certs-20220601110327-6708: {Name:mk2bc8f54b3ac1967b6e5e724f1be8808370dc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:03:27.646698  232046 start.go:356] acquired machines lock for "embed-certs-20220601110327-6708" in 107.872µs
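
A note on the machines lock just acquired: the lock specs dumped above show Delay:500ms and Timeout:10m0s, i.e. poll every half second and give up after ten minutes. A minimal sketch of that poll-until-deadline pattern (hypothetical code and lock path; minikube's lock.go uses its own richer lock type):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// tryLock polls for an exclusive lock file every delay until timeout.
	func tryLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		unlock, err := tryLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer unlock()
		fmt.Println("acquired machines lock")
	}
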
	I0601 11:03:27.646732  232046 start.go:91] Provisioning new machine with config: &{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:03:27.646835  232046 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:03:25.513549  226685 pod_ready.go:102] pod "metrics-server-b955d9d8-sssvj" in "kube-system" namespace has status "Ready":"False"
	I0601 11:03:28.014466  226685 pod_ready.go:102] pod "metrics-server-b955d9d8-sssvj" in "kube-system" namespace has status "Ready":"False"
	I0601 11:03:25.666748  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:28.169412  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:27.649304  232046 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:03:27.649537  232046 start.go:165] libmachine.API.Create for "embed-certs-20220601110327-6708" (driver="docker")
	I0601 11:03:27.649568  232046 client.go:168] LocalClient.Create starting
	I0601 11:03:27.649623  232046 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:03:27.649654  232046 main.go:134] libmachine: Decoding PEM data...
	I0601 11:03:27.649673  232046 main.go:134] libmachine: Parsing certificate...
	I0601 11:03:27.649724  232046 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:03:27.649742  232046 main.go:134] libmachine: Decoding PEM data...
	I0601 11:03:27.649755  232046 main.go:134] libmachine: Parsing certificate...
	I0601 11:03:27.650032  232046 cli_runner.go:164] Run: docker network inspect embed-certs-20220601110327-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:03:27.681068  232046 cli_runner.go:211] docker network inspect embed-certs-20220601110327-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:03:27.681136  232046 network_create.go:272] running [docker network inspect embed-certs-20220601110327-6708] to gather additional debugging logs...
	I0601 11:03:27.681161  232046 cli_runner.go:164] Run: docker network inspect embed-certs-20220601110327-6708
	W0601 11:03:27.711044  232046 cli_runner.go:211] docker network inspect embed-certs-20220601110327-6708 returned with exit code 1
	I0601 11:03:27.711071  232046 network_create.go:275] error running [docker network inspect embed-certs-20220601110327-6708]: docker network inspect embed-certs-20220601110327-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220601110327-6708
	I0601 11:03:27.711087  232046 network_create.go:277] output of [docker network inspect embed-certs-20220601110327-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220601110327-6708
	
	** /stderr **
	I0601 11:03:27.711128  232046 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:03:27.741548  232046 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-e3f0b201da39 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:75:ce:92:ab}}
	I0601 11:03:27.742201  232046 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-99443bab5d3f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:22:72:77:e2}}
	I0601 11:03:27.742658  232046 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-787fac1877c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:00:92:62:57}}
	I0601 11:03:27.743303  232046 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc0005248e8] misses:0}
	I0601 11:03:27.743334  232046 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:03:27.743348  232046 network_create.go:115] attempt to create docker network embed-certs-20220601110327-6708 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0601 11:03:27.743397  232046 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601110327-6708
	I0601 11:03:27.808157  232046 network_create.go:99] docker network embed-certs-20220601110327-6708 192.168.76.0/24 created
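
The network_create step above walks candidate 192.168.x.0/24 subnets (49, 58, 67, 76 in this log, i.e. steps of 9) and takes the first one not already claimed by a docker bridge. A minimal sketch of that selection, assuming the taken set was already gathered from the docker network inspect calls:

	package main

	import "fmt"

	func main() {
		// subnets already claimed by docker bridges, per the inspect calls above
		taken := map[string]bool{
			"192.168.49.0": true,
			"192.168.58.0": true,
			"192.168.67.0": true,
		}
		for third := 49; third <= 247; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0", third)
			if !taken[subnet] {
				fmt.Println("using free private subnet", subnet+"/24") // -> 192.168.76.0/24
				return
			}
		}
	}
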
	I0601 11:03:27.808191  232046 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-20220601110327-6708" container
	I0601 11:03:27.808249  232046 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:03:27.842915  232046 cli_runner.go:164] Run: docker volume create embed-certs-20220601110327-6708 --label name.minikube.sigs.k8s.io=embed-certs-20220601110327-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:03:27.874391  232046 oci.go:103] Successfully created a docker volume embed-certs-20220601110327-6708
	I0601 11:03:27.874472  232046 cli_runner.go:164] Run: docker run --rm --name embed-certs-20220601110327-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220601110327-6708 --entrypoint /usr/bin/test -v embed-certs-20220601110327-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:03:28.439714  232046 oci.go:107] Successfully prepared a docker volume embed-certs-20220601110327-6708
	I0601 11:03:28.439786  232046 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:03:28.439813  232046 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:03:28.439924  232046 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220601110327-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 11:03:30.514072  226685 pod_ready.go:102] pod "metrics-server-b955d9d8-sssvj" in "kube-system" namespace has status "Ready":"False"
	I0601 11:03:32.514233  226685 pod_ready.go:102] pod "metrics-server-b955d9d8-sssvj" in "kube-system" namespace has status "Ready":"False"
	I0601 11:03:35.014168  226685 pod_ready.go:102] pod "metrics-server-b955d9d8-sssvj" in "kube-system" namespace has status "Ready":"False"
	I0601 11:03:30.667075  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:33.166553  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:35.166630  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:35.969715  232046 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220601110327-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (7.529712266s)
	I0601 11:03:35.969747  232046 kic.go:188] duration metric: took 7.529932 seconds to extract preloaded images to volume
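
The extraction just completed runs a throwaway container whose entrypoint is tar, mounting the lz4 preload read-only and the named volume as the target. A Go sketch of the same docker run (the host tarball path below is a hypothetical placeholder for the long Jenkins path in the log):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// assumptions: docker on PATH; tarball path and volume name stand in for the real ones
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // hypothetical host path
			"-v", "embed-certs-20220601110327-6708:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
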
	W0601 11:03:35.969882  232046 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 11:03:35.969977  232046 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:03:36.072639  232046 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20220601110327-6708 --name embed-certs-20220601110327-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220601110327-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20220601110327-6708 --network embed-certs-20220601110327-6708 --ip 192.168.76.2 --volume embed-certs-20220601110327-6708:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:03:36.485272  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Running}}
	I0601 11:03:36.521985  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:03:36.556366  232046 cli_runner.go:164] Run: docker exec embed-certs-20220601110327-6708 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:03:36.619414  232046 oci.go:247] the created container "embed-certs-20220601110327-6708" has a running status.
	I0601 11:03:36.619447  232046 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa...
	I0601 11:03:36.967549  232046 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:03:37.050742  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:03:37.083967  232046 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:03:37.083990  232046 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220601110327-6708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:03:37.165827  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:03:37.197294  232046 machine.go:88] provisioning docker machine ...
	I0601 11:03:37.197354  232046 ubuntu.go:169] provisioning hostname "embed-certs-20220601110327-6708"
	I0601 11:03:37.197415  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.227438  232046 main.go:134] libmachine: Using SSH client type: native
	I0601 11:03:37.227631  232046 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0601 11:03:37.227656  232046 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220601110327-6708 && echo "embed-certs-20220601110327-6708" | sudo tee /etc/hostname
	I0601 11:03:37.347961  232046 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220601110327-6708
	
	I0601 11:03:37.348021  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.379387  232046 main.go:134] libmachine: Using SSH client type: native
	I0601 11:03:37.379521  232046 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0601 11:03:37.379565  232046 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220601110327-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220601110327-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220601110327-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:03:37.499472  232046 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:03:37.499496  232046 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:03:37.499515  232046 ubuntu.go:177] setting up certificates
	I0601 11:03:37.499523  232046 provision.go:83] configureAuth start
	I0601 11:03:37.499566  232046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:03:37.531509  232046 provision.go:138] copyHostCerts
	I0601 11:03:37.531564  232046 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:03:37.531571  232046 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:03:37.531630  232046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:03:37.531696  232046 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:03:37.531702  232046 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:03:37.531724  232046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:03:37.531774  232046 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:03:37.531783  232046 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:03:37.531801  232046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:03:37.531841  232046 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220601110327-6708 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220601110327-6708]
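
The server certificate above is signed by the cached minikubeCA and carries both IP and DNS SANs. A minimal sketch of that issuance with Go's crypto/x509 (illustrative; the CA here is generated inline rather than loaded from .minikube/certs):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// stand-in for minikube's cached CA keypair
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// server cert with the SAN list from the provision log line
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20220601110327-6708"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "embed-certs-20220601110327-6708"},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Println("server cert DER bytes:", len(der))
	}
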
	I0601 11:03:37.611352  232046 provision.go:172] copyRemoteCerts
	I0601 11:03:37.611407  232046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:03:37.611439  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.642849  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:37.727077  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 11:03:37.743826  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:03:37.760239  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:03:37.776574  232046 provision.go:86] duration metric: configureAuth took 277.039388ms
	I0601 11:03:37.776599  232046 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:03:37.776775  232046 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:03:37.776793  232046 machine.go:91] provisioned docker machine in 579.472653ms
	I0601 11:03:37.776801  232046 client.go:171] LocalClient.Create took 10.127224112s
	I0601 11:03:37.776825  232046 start.go:173] duration metric: libmachine.API.Create for "embed-certs-20220601110327-6708" took 10.127283476s
	I0601 11:03:37.776838  232046 start.go:306] post-start starting for "embed-certs-20220601110327-6708" (driver="docker")
	I0601 11:03:37.776844  232046 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:03:37.776882  232046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:03:37.776915  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.808696  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:37.895158  232046 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:03:37.897676  232046 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:03:37.897697  232046 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:03:37.897712  232046 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:03:37.897720  232046 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:03:37.897730  232046 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:03:37.897783  232046 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:03:37.897857  232046 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:03:37.897957  232046 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:03:37.904368  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:03:37.921323  232046 start.go:309] post-start completed in 144.475564ms
	I0601 11:03:37.921630  232046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:03:37.953932  232046 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:03:37.954186  232046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:03:37.954237  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.984421  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:38.068308  232046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:03:38.072096  232046 start.go:134] duration metric: createHost completed in 10.42524994s
	I0601 11:03:38.072124  232046 start.go:81] releasing machines lock for "embed-certs-20220601110327-6708", held for 10.425412714s
	I0601 11:03:38.072205  232046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:03:38.103759  232046 ssh_runner.go:195] Run: systemctl --version
	I0601 11:03:38.103804  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:38.103904  232046 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:03:38.103971  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:38.139307  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:38.140299  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:38.224785  232046 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:03:38.249773  232046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:03:38.258735  232046 docker.go:187] disabling docker service ...
	I0601 11:03:38.258782  232046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:03:38.274556  232046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:03:38.283463  232046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:03:38.363548  232046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:03:38.438994  232046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:03:38.447915  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:03:38.460136  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:03:38.467936  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:03:38.475365  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:03:38.483074  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:03:38.490614  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:03:38.498210  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
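
For reference, the payload "dmVyc2lvbiA9IDIK" written above is plain base64; decoding it yields the one-line containerd drop-in "version = 2". A quick check in Go:

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	func main() {
		b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(b)) // prints: version = 2
	}
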
	I0601 11:03:38.510391  232046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:03:38.517147  232046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:03:38.523067  232046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:03:38.594580  232046 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:03:38.672884  232046 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:03:38.672946  232046 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:03:38.676772  232046 start.go:468] Will wait 60s for crictl version
	I0601 11:03:38.676824  232046 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:03:38.703410  232046 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:03:38.703469  232046 ssh_runner.go:195] Run: containerd --version
	I0601 11:03:38.730051  232046 ssh_runner.go:195] Run: containerd --version
	I0601 11:03:38.759660  232046 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:03:38.761313  232046 cli_runner.go:164] Run: docker network inspect embed-certs-20220601110327-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:03:38.792776  232046 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0601 11:03:38.796019  232046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
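
The one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal entry, appending the fresh mapping, and copying a staged file back over the original. A rough Go equivalent (illustrative; it writes the file directly instead of staging through /tmp/h.$$ with sudo cp):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts" // writing requires root
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line) // drop any stale entry, keep the rest
			}
		}
		kept = append(kept, "192.168.76.1\thost.minikube.internal")
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
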
	I0601 11:03:38.807138  232046 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:03:38.808465  232046 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:03:38.808518  232046 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:03:38.831179  232046 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:03:38.831201  232046 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:03:38.831236  232046 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:03:38.854438  232046 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:03:38.854456  232046 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:03:38.854492  232046 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:03:38.877089  232046 cni.go:95] Creating CNI manager for ""
	I0601 11:03:38.877113  232046 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:03:38.877130  232046 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:03:38.877147  232046 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220601110327-6708 NodeName:embed-certs-20220601110327-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:03:38.877304  232046 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220601110327-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:03:38.877407  232046 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220601110327-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
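
The kubelet unit drop-in above is assembled in memory and then scp'd into /etc/systemd/system/kubelet.service.d (see the transfer lines that follow). A sketch of rendering such a drop-in with text/template (hypothetical template, trimmed to a few of the flags shown above; not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		// values taken from the unit rendered in the log above
		_ = t.Execute(os.Stdout, map[string]string{
			"Runtime":     "containerd",
			"KubeVersion": "v1.23.6",
			"Node":        "embed-certs-20220601110327-6708",
			"IP":          "192.168.76.2",
		})
	}
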
	I0601 11:03:38.877460  232046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:03:38.884305  232046 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:03:38.884366  232046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:03:38.890762  232046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0601 11:03:38.902732  232046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:03:38.914836  232046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0601 11:03:38.926843  232046 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:03:38.929504  232046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:03:38.938158  232046 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708 for IP: 192.168.76.2
	I0601 11:03:38.938248  232046 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:03:38.938288  232046 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:03:38.938340  232046 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.key
	I0601 11:03:38.938356  232046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.crt with IP's: []
	I0601 11:03:39.037676  232046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.crt ...
	I0601 11:03:39.037700  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.crt: {Name:mkb482ef7c144c6701a53669ca934f9776ff7e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.037877  232046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.key ...
	I0601 11:03:39.037890  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.key: {Name:mk86cf41124d17fab06576d0d1084ed026783a31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.037971  232046 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25
	I0601 11:03:39.037986  232046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:03:39.203094  232046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt.31bdca25 ...
	I0601 11:03:39.203119  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt.31bdca25: {Name:mkbc3d579be8c4dc7924c3d14d84afaa6c55b0d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.203293  232046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25 ...
	I0601 11:03:39.203306  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25: {Name:mk4f3e4a8d736c0ee906145e7cc0096de085b853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.203388  232046 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt
	I0601 11:03:39.203449  232046 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key
	I0601 11:03:39.203496  232046 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key
	I0601 11:03:39.203509  232046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt with IP's: []
	I0601 11:03:39.412522  232046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt ...
	I0601 11:03:39.412549  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt: {Name:mk3219f008b9eb06ac1b5c7c488bd5ddb176e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.412738  232046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key ...
	I0601 11:03:39.412751  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key: {Name:mkf15b71171093bbb6fa967d367fb967f44f4dab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.412919  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:03:39.412954  232046 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:03:39.412968  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:03:39.412995  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:03:39.413022  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:03:39.413055  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:03:39.413097  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:03:39.413629  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:03:39.431855  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:03:39.448411  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:03:39.465213  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:03:39.481416  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:03:39.500003  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:03:39.517883  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:03:39.534165  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:03:39.550900  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:03:39.569091  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:03:39.586565  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:03:39.603409  232046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:03:39.615673  232046 ssh_runner.go:195] Run: openssl version
	I0601 11:03:39.620205  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:03:39.627365  232046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:03:39.630360  232046 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:03:39.630404  232046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:03:39.635070  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:03:39.642021  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:03:39.649356  232046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:03:39.652354  232046 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:03:39.652403  232046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:03:39.657214  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:03:39.664554  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:03:39.672005  232046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:03:39.675027  232046 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:03:39.675084  232046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:03:39.680552  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
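
Note on the three symlink steps above: OpenSSL resolves CA certificates in /etc/ssl/certs by subject hash, so each installed PEM gets a "<hash>.0" alias (b5213941.0, 51391683.0, and 3ec20f2e.0 here). A minimal shell sketch of the convention, with the hash recomputed rather than hard-coded:

	# Compute the subject hash and create the lookup alias OpenSSL expects.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
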
	I0601 11:03:39.687796  232046 kubeadm.go:395] StartCluster: {Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:03:39.687929  232046 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:03:39.687982  232046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:03:39.713887  232046 cri.go:87] found id: ""
	I0601 11:03:39.713948  232046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:03:39.720682  232046 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:03:39.727314  232046 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:03:39.727367  232046 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:03:39.733877  232046 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:03:39.733909  232046 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
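
The long --ignore-preflight-errors list exists because, with the docker driver, the "node" is itself a container sharing the CI host's kernel, so kubeadm's host-level checks (SystemVerification, Swap, Mem, the bridge-nf-call-iptables file) would fail spuriously; minikube says as much at the SystemVerification line above. A hedged sketch of rerunning just the preflight phase by hand, assuming the node container keeps the profile name as the docker driver does:

	# Hypothetical manual re-run of kubeadm's preflight phase inside the node container.
	docker exec embed-certs-20220601110327-6708 sh -c 'sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all'
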
	I0601 11:03:37.016549  226685 pod_ready.go:102] pod "metrics-server-b955d9d8-sssvj" in "kube-system" namespace has status "Ready":"False"
	I0601 11:03:39.513933  226685 pod_ready.go:102] pod "metrics-server-b955d9d8-sssvj" in "kube-system" namespace has status "Ready":"False"
	I0601 11:03:37.665903  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:39.666717  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:39.981267  232046 out.go:204]   - Generating certificates and keys ...
	I0601 11:03:42.166889  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:44.666300  211793 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:03:45.167926  211793 node_ready.go:38] duration metric: took 4m0.010654116s waiting for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:03:45.169998  211793 out.go:177] 
	W0601 11:03:45.171405  211793 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:03:45.171426  211793 out.go:239] * 
	W0601 11:03:45.172177  211793 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:03:45.174263  211793 out.go:177] 
	I0601 11:03:41.514440  226685 pod_ready.go:102] pod "metrics-server-b955d9d8-sssvj" in "kube-system" namespace has status "Ready":"False"
	I0601 11:03:44.013676  226685 pod_ready.go:102] pod "metrics-server-b955d9d8-sssvj" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	df2a3875ec723       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   0a6a4ae8178de
	8f9cec9f497f7       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   0a6a4ae8178de
	01651d3598805       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   f1d9aedf42d24
	0b9cf8973c884       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   ac769aefe340a
	f18885873e44e       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   3736e1d98ec61
	92f272874915c       b2756210eeabf       4 minutes ago        Running             etcd                      0                   41c0131fc288d
	e4d08ecd5adee       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   0a47511bd2aec
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 10:59:01 UTC, end at Wed 2022-06-01 11:03:46 UTC. --
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.011078791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.011125208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.011287003Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb pid=1819 runtime=io.containerd.runc.v2
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.024102593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.024196478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.024210190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.024999048Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1d9aedf42d247b48ceff0c7c242db89a09fbc660a6d23ffbbf6dabd0853cd31 pid=1834 runtime=io.containerd.runc.v2
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.156570891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9db28,Uid:8cae7678-59a9-4d84-b561-a852eacc0638,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1d9aedf42d247b48ceff0c7c242db89a09fbc660a6d23ffbbf6dabd0853cd31\""
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.171209723Z" level=info msg="CreateContainer within sandbox \"f1d9aedf42d247b48ceff0c7c242db89a09fbc660a6d23ffbbf6dabd0853cd31\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.205117287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-rvdm8,Uid:0648d955-2d20-449d-88b9-57fb087825d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\""
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.211377007Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.236005507Z" level=info msg="CreateContainer within sandbox \"f1d9aedf42d247b48ceff0c7c242db89a09fbc660a6d23ffbbf6dabd0853cd31\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3\""
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.253664437Z" level=info msg="StartContainer for \"01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3\""
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.260859835Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"8f9cec9f497f70922114b5778ad83667fefb19394aa9a2008cd70a55ebd910b6\""
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.264644449Z" level=info msg="StartContainer for \"8f9cec9f497f70922114b5778ad83667fefb19394aa9a2008cd70a55ebd910b6\""
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.488092338Z" level=info msg="StartContainer for \"01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3\" returns successfully"
	Jun 01 10:59:45 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T10:59:45.509208422Z" level=info msg="StartContainer for \"8f9cec9f497f70922114b5778ad83667fefb19394aa9a2008cd70a55ebd910b6\" returns successfully"
	Jun 01 11:02:25 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:02:25.799813525Z" level=info msg="shim disconnected" id=8f9cec9f497f70922114b5778ad83667fefb19394aa9a2008cd70a55ebd910b6
	Jun 01 11:02:25 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:02:25.799933685Z" level=warning msg="cleaning up after shim disconnected" id=8f9cec9f497f70922114b5778ad83667fefb19394aa9a2008cd70a55ebd910b6 namespace=k8s.io
	Jun 01 11:02:25 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:02:25.799952943Z" level=info msg="cleaning up dead shim"
	Jun 01 11:02:25 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:02:25.808993591Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:02:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2465 runtime=io.containerd.runc.v2\n"
	Jun 01 11:02:25 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:02:25.831034979Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jun 01 11:02:25 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:02:25.844249811Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"df2a3875ec723f9785edea449611ef162e14636786905cd989570b375ffed8b4\""
	Jun 01 11:02:25 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:02:25.844714128Z" level=info msg="StartContainer for \"df2a3875ec723f9785edea449611ef162e14636786905cd989570b375ffed8b4\""
	Jun 01 11:02:26 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:02:26.057278513Z" level=info msg="StartContainer for \"df2a3875ec723f9785edea449611ef162e14636786905cd989570b375ffed8b4\" returns successfully"
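
The containerd log above lines up with the container-status table: the first kindnet-cni container (8f9cec9f497f7...) ran from 10:59:45 until its shim disconnected at 11:02:25, and attempt 1 (df2a3875ec723...) was started in the same sandbox. The exited attempt's output would show why the CNI config was never written; a sketch using the truncated ID from the status table, assuming crictl is on PATH in the node container:

	# Fetch logs for the exited kindnet attempt (ID prefix from the status table above).
	docker exec old-k8s-version-20220601105850-6708 crictl logs 8f9cec9f497f7
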
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220601105850-6708
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220601105850-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=old-k8s-version-20220601105850-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T10_59_29_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 10:59:23 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:03:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:03:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:03:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:03:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    old-k8s-version-20220601105850-6708
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	 System UUID:                cf752223-716a-46c7-b06a-74cba9af00dc
	 Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	 Kernel Version:             5.13.0-1027-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.4
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220601105850-6708                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                kindnet-rvdm8                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                kube-apiserver-old-k8s-version-20220601105850-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                kube-controller-manager-old-k8s-version-20220601105850-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                kube-proxy-9db28                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                kube-scheduler-old-k8s-version-20220601105850-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  NodeHasSufficientMemory  4m28s (x8 over 4m28s)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s (x8 over 4m28s)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s (x7 over 4m28s)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m1s                   kube-proxy, old-k8s-version-20220601105850-6708  Starting kube-proxy.
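
The describe output pins down the FirstStart failure: the node stayed NotReady for the whole window because the kubelet reported "cni plugin not initialized", and the node.kubernetes.io/not-ready:NoSchedule taint in turn kept coredns and storage-provisioner from running (see the post-mortem below). One way to check whether any CNI config ever landed, assuming the non-default conf dir /etc/cni/net.mk that minikube passes via the kubelet ExtraOptions shown earlier also applies to this profile:

	# Inspect both the default and the minikube-specific CNI conf dirs inside the node.
	docker exec old-k8s-version-20220601105850-6708 ls -la /etc/cni/net.d /etc/cni/net.mk
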
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
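
The "martian source" lines are the kernel's reverse-path filter flagging packets whose source address should not appear on the receiving interface; with nested Docker bridges on a shared runner this is routine noise, not a failure signal. The logging is governed by two sysctls (a read-only check):

	# Show whether strict reverse-path filtering and martian logging are enabled.
	sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians
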
	
	* 
	* ==> etcd [92f272874915c4877257c68e1d43539f7183cbef97f4b0837113afe72f1cdb3c] <==
	* 2022-06-01 10:59:19.497728 I | raft: b2c6679ac05f2cf1 became follower at term 0
	2022-06-01 10:59:19.497735 I | raft: newRaft b2c6679ac05f2cf1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2022-06-01 10:59:19.497738 I | raft: b2c6679ac05f2cf1 became follower at term 1
	2022-06-01 10:59:19.557971 W | auth: simple token is not cryptographically signed
	2022-06-01 10:59:19.561258 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-06-01 10:59:19.561609 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-06-01 10:59:19.561830 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2022-06-01 10:59:19.563596 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-06-01 10:59:19.563780 I | embed: listening for metrics on http://192.168.58.2:2381
	2022-06-01 10:59:19.563857 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-06-01 10:59:20.398057 I | raft: b2c6679ac05f2cf1 is starting a new election at term 1
	2022-06-01 10:59:20.398087 I | raft: b2c6679ac05f2cf1 became candidate at term 2
	2022-06-01 10:59:20.398113 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	2022-06-01 10:59:20.398122 I | raft: b2c6679ac05f2cf1 became leader at term 2
	2022-06-01 10:59:20.398127 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2022-06-01 10:59:20.398431 I | etcdserver: published {Name:old-k8s-version-20220601105850-6708 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2022-06-01 10:59:20.398459 I | embed: ready to serve client requests
	2022-06-01 10:59:20.398511 I | embed: ready to serve client requests
	2022-06-01 10:59:20.398527 I | etcdserver: setting up the initial cluster version to 3.3
	2022-06-01 10:59:20.399286 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-06-01 10:59:20.399361 I | etcdserver/api: enabled capabilities for version 3.3
	2022-06-01 10:59:20.400666 I | embed: serving client requests on 192.168.58.2:2379
	2022-06-01 10:59:20.401288 I | embed: serving client requests on 127.0.0.1:2379
	2022-06-01 11:00:27.079535 W | etcdserver: request "header:<ID:3238511576856218971 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:394 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238511576856218969 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>" with result "size:16" took too long (105.707876ms) to execute
	2022-06-01 11:00:27.370158 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (109.381517ms) to execute
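
The two "took too long" warnings barely exceed etcd's roughly 100 ms slow-request threshold and point at brief disk or CPU contention on the shared runner rather than anything cluster-breaking. If etcdctl is available in the node image, health can be probed directly; a sketch reusing the cert layout from the ClientTLS line above, plus the healthcheck-client pair kubeadm normally generates (that pair's path is an assumption, it does not appear in this log):

	# Hypothetical etcd health probe from inside the node container.
	docker exec -e ETCDCTL_API=3 old-k8s-version-20220601105850-6708 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	  --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	  endpoint health
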
	
	* 
	* ==> kernel <==
	*  11:03:46 up 46 min,  0 users,  load average: 1.18, 2.12, 2.03
	Linux old-k8s-version-20220601105850-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0b9cf8973c8844f5d3f241696625e5764fbd79a0c0fa64202fca8a67567e726a] <==
	* I0601 10:59:23.500638       1 establishing_controller.go:73] Starting EstablishingController
	I0601 10:59:23.500713       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0601 10:59:23.500739       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0601 10:59:23.502269       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.58.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0601 10:59:23.600240       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 10:59:23.600365       1 cache.go:39] Caches are synced for autoregister controller
	I0601 10:59:23.600658       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 10:59:23.653039       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0601 10:59:24.500177       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0601 10:59:24.500198       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 10:59:24.500206       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 10:59:24.504915       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0601 10:59:24.507571       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0601 10:59:24.507599       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0601 10:59:25.260704       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 10:59:26.281264       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 10:59:26.561277       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0601 10:59:26.876565       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 10:59:26.877208       1 controller.go:606] quota admission added evaluator for: endpoints
	I0601 10:59:27.764458       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0601 10:59:28.362361       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0601 10:59:28.727470       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0601 10:59:44.218023       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0601 10:59:44.232173       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0601 10:59:44.620734       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [f18885873e44ef000cea8b73305d4b972b24f41b3a821ebf6ed2fbb3c400745d] <==
	* W0601 10:59:44.510205       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="old-k8s-version-20220601105850-6708" does not exist
	I0601 10:59:44.513080       1 shared_informer.go:204] Caches are synced for attach detach 
	I0601 10:59:44.515251       1 shared_informer.go:204] Caches are synced for taint 
	I0601 10:59:44.515323       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0601 10:59:44.515326       1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: 
	W0601 10:59:44.515439       1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-20220601105850-6708. Assuming now as a timestamp.
	I0601 10:59:44.515423       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20220601105850-6708", UID:"9a70fc40-abc0-4b88-bdf7-4c4dea7658d1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20220601105850-6708 event: Registered Node old-k8s-version-20220601105850-6708 in Controller
	I0601 10:59:44.515473       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 10:59:44.562663       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0601 10:59:44.562672       1 shared_informer.go:204] Caches are synced for stateful set 
	I0601 10:59:44.566561       1 shared_informer.go:204] Caches are synced for node 
	I0601 10:59:44.566584       1 range_allocator.go:172] Starting range CIDR allocator
	I0601 10:59:44.566598       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
	I0601 10:59:44.578102       1 shared_informer.go:204] Caches are synced for TTL 
	I0601 10:59:44.616316       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0601 10:59:44.631230       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"46c63a7a-da9c-4b21-b27e-3ab2cc1bf42c", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-9db28
	I0601 10:59:44.633700       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"aee4ae9e-2298-4d10-81af-933537f4ccd9", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rvdm8
	I0601 10:59:44.666231       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0601 10:59:44.666319       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 10:59:44.667908       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0601 10:59:44.674142       1 range_allocator.go:359] Set node old-k8s-version-20220601105850-6708 PodCIDR to [10.244.0.0/24]
	I0601 10:59:44.709226       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0601 10:59:44.718073       1 shared_informer.go:204] Caches are synced for resource quota 
	I0601 10:59:45.806836       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I0601 10:59:45.907072       1 shared_informer.go:204] Caches are synced for resource quota 
	
	* 
	* ==> kube-proxy [01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3] <==
	* W0601 10:59:45.684675       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0601 10:59:45.696662       1 node.go:135] Successfully retrieved node IP: 192.168.58.2
	I0601 10:59:45.696711       1 server_others.go:149] Using iptables Proxier.
	I0601 10:59:45.697092       1 server.go:529] Version: v1.16.0
	I0601 10:59:45.698531       1 config.go:313] Starting service config controller
	I0601 10:59:45.698559       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0601 10:59:45.698582       1 config.go:131] Starting endpoints config controller
	I0601 10:59:45.698600       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0601 10:59:45.798783       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0601 10:59:45.799058       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e4d08ecd5adee34f6ccfaeb042d497cedc44597ee436ef3a30c0c98e725c3582] <==
	* I0601 10:59:23.568522       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0601 10:59:23.569198       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0601 10:59:23.658434       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 10:59:23.660485       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 10:59:23.661016       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 10:59:23.662119       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:23.665509       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:23.665685       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 10:59:23.665696       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 10:59:23.665786       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 10:59:23.665877       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 10:59:23.666262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 10:59:23.667640       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 10:59:24.659538       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 10:59:24.661616       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 10:59:24.662868       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 10:59:24.664538       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:24.666434       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:24.667461       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 10:59:24.668599       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 10:59:24.669697       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 10:59:24.670863       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 10:59:24.672730       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 10:59:24.673763       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 10:59:45.971438       1 factory.go:585] pod is already present in the activeQ
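
The Forbidden errors above are the usual v1.16 startup race: the scheduler's informers begin listing before the RBAC bootstrap roles are written, then recover on retry, which is consistent with the control-plane pods scheduling successfully afterwards. Scheduler permissions can be spot-checked once the cluster is up (a sketch):

	# Impersonate the scheduler and ask whether it may list pods cluster-wide.
	kubectl --context old-k8s-version-20220601105850-6708 auth can-i list pods --as=system:kube-scheduler
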
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 10:59:01 UTC, end at Wed 2022-06-01 11:03:46 UTC. --
	Jun 01 11:01:43 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:01:43.707805     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:01:48 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:01:48.708485     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:01:53 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:01:53.709232     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:01:58 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:01:58.709956     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:03 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:03.710617     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:08 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:08.711279     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:13 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:13.711963     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:18 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:18.712709     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:23 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:23.713349     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:28 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:28.714087     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:33 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:33.714684     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:38 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:38.715398     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:43 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:43.715952     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:48 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:48.716660     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:53 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:53.717236     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:02:58 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:02:58.717996     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:03:03 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:03:03.718528     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:03:08 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:03:08.719274     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:03:13 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:03:13.720128     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:03:18 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:03:18.720832     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:03:23 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:03:23.721647     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:03:28 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:03:28.722525     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:03:33 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:03:33.723285     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:03:38 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:03:38.724184     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:03:43 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:03:43.724924     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
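
The identical kubelet error repeats every ~5 s across the entire captured window, so the CNI plugin never initialized at any point, matching the NotReady condition earlier. Given that kindnet restarted once, its logs are the next diagnostic step; a sketch, with the pod name taken from the controller-manager events above:

	# Logs from the crashed attempt, then the current one.
	kubectl --context old-k8s-version-20220601105850-6708 -n kube-system logs kindnet-rvdm8 --previous
	kubectl --context old-k8s-version-20220601105850-6708 -n kube-system logs kindnet-rvdm8
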
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-5644d7b6d9-5z28m storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 describe pod coredns-5644d7b6d9-5z28m storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601105850-6708 describe pod coredns-5644d7b6d9-5z28m storage-provisioner: exit status 1 (52.522494ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-5z28m" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220601105850-6708 describe pod coredns-5644d7b6d9-5z28m storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (296.62s)
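The repeated kubelet errors above ("cni plugin not initialized") are the root symptom: no CNI config ever became active in the conf dir minikube points the kubelet at on this containerd runner (/etc/cni/net.mk), so coredns and storage-provisioner never got sandboxes. A minimal triage sketch, assuming the profile were still up (these commands are illustrative, not part of the test):

	# Does the CNI conf dir the kubelet was pointed at actually contain a config?
	out/minikube-linux-amd64 ssh -p old-k8s-version-20220601105850-6708 -- ls -la /etc/cni/net.mk
	# What does the kubelet itself say about the network plugin?
	out/minikube-linux-amd64 ssh -p old-k8s-version-20220601105850-6708 -- sudo journalctl -u kubelet --no-pager | grep -i cni | tail -n 20
	# Which system pods are stuck Pending as a result?
	kubectl --context old-k8s-version-20220601105850-6708 -n kube-system get pods -o wide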

TestNetworkPlugins/group/calico/NetCatPod (900.88s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220601104839-6708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-zcwxg" [a9070a6d-1639-49c2-b05b-948d4c20da42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: ***** TestNetworkPlugins/group/calico/NetCatPod: pod "app=netcat" failed to start within 15m0s: timed out waiting for the condition ****
net_test.go:152: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p calico-20220601104839-6708 -n calico-20220601104839-6708
net_test.go:152: TestNetworkPlugins/group/calico/NetCatPod: showing logs for failed pods as of 2022-06-01 11:14:15.382379178 +0000 UTC m=+3256.381999413
net_test.go:152: (dbg) Run:  kubectl --context calico-20220601104839-6708 describe po netcat-668db85669-zcwxg -n default
net_test.go:152: (dbg) kubectl --context calico-20220601104839-6708 describe po netcat-668db85669-zcwxg -n default:
Name:           netcat-668db85669-zcwxg
Namespace:      default
Priority:       0
Node:           calico-20220601104839-6708/192.168.67.2
Start Time:     Wed, 01 Jun 2022 10:59:14 +0000
Labels:         app=netcat
                pod-template-hash=668db85669
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/netcat-668db85669
Containers:
  dnsutils:
    Container ID:  
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      while true; do echo hello | nc -l -p 8080; done
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kfh4m (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-kfh4m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                  Age                  From               Message
----     ------                  ----                 ----               -------
Normal   Scheduled               15m                  default-scheduler  Successfully assigned default/netcat-668db85669-zcwxg to calico-20220601104839-6708
Warning  FailedCreatePodSandBox  14m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "99b292dc5eb5e091e07b188415c7cd52a74516f3a2332f9d3548a8c717bea378": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "bf0ae33fddc2cb3bdeee4ad0a28275c141f75b81db82bc0a04c853f247db196e": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
Warning  FailedCreatePodSandBox  11m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "250cdddff0f9af35bbd3476e17de6c29fa7b046ec48b1ce2c191544e273b04b2": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "54f790d375175204f1ba954c1fbb211572992a39ce4cc37f79743a393597dac9": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
Warning  FailedCreatePodSandBox  9m4s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1865fe194a1812b6815aa82d0dc40d9832c56a78ec34219e506b28430d47fca7": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
Warning  FailedCreatePodSandBox  7m52s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5097ebac7c55f3cd3468a1a31164636fdd3f88095b16d201d807b6c9fd7f4324": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
Warning  FailedCreatePodSandBox  6m37s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "89fda9925a5b1e543467e5921a0876089a560e95878fd542a61739725f7e0cd6": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
Warning  FailedCreatePodSandBox  5m24s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5259d727b5dac5c0ebaaaf53d879a10990b7de9c425f71c49de67c0a688ab60a": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
Warning  FailedCreatePodSandBox  4m9s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "99c0519ee53a79248573cc5ece952d180ae3c3f0a921b055a63efd0f21f798a2": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
Warning  FailedCreatePodSandBox  27s (x3 over 2m57s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c4a06385fc0ad0f7d77a2c296a77434caecc5339a727e197dc7fdcebc832b356": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: i/o timeout
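For post-mortems like this one, the same event stream can be pulled without a full describe; a sketch, assuming the cluster were still reachable:

	kubectl --context calico-20220601104839-6708 get events -n default \
	  --field-selector involvedObject.name=netcat-668db85669-zcwxg --sort-by=.lastTimestamp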
net_test.go:152: (dbg) Run:  kubectl --context calico-20220601104839-6708 logs netcat-668db85669-zcwxg -n default
net_test.go:152: (dbg) Non-zero exit: kubectl --context calico-20220601104839-6708 logs netcat-668db85669-zcwxg -n default: exit status 1 (67.221984ms)

** stderr ** 
	Error from server (BadRequest): container "dnsutils" in pod "netcat-668db85669-zcwxg" is waiting to start: ContainerCreating

** /stderr **
net_test.go:152: kubectl --context calico-20220601104839-6708 logs netcat-668db85669-zcwxg -n default: exit status 1
net_test.go:153: failed waiting for netcat pod: app=netcat within 15m0s: timed out waiting for the condition
--- FAIL: TestNetworkPlugins/group/calico/NetCatPod (900.88s)
E0601 11:16:08.062553    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
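Every FailedCreatePodSandBox event above shows the same failure mode: the calico CNI ADD hook times out reaching the kubernetes service VIP (dial tcp 10.96.0.1:443: i/o timeout), so no sandbox is ever created. A quick follow-up, sketched under the assumption that the calico profile is still running:

	# Is the calico dataplane itself up?
	kubectl --context calico-20220601104839-6708 -n kube-system get pods -l k8s-app=calico-node -o wide
	# Are there live endpoints behind the kubernetes service VIP?
	kubectl --context calico-20220601104839-6708 get endpoints kubernetes
	# Can the node reach the VIP at all (i.e. did kube-proxy program it)?
	out/minikube-linux-amd64 ssh -p calico-20220601104839-6708 -- curl -sk -m 5 https://10.96.0.1:443/healthz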

TestStartStop/group/embed-certs/serial/FirstStart (283.31s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220601110327-6708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:03:34.904758    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:34.910003    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:34.920223    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:34.940482    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:34.980804    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:35.061443    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:35.221877    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:35.542406    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:35.614597    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:36.182991    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:37.463977    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:40.025123    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:45.145649    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20220601110327-6708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (4m41.442033388s)

-- stdout --
	* [embed-certs-20220601110327-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node embed-certs-20220601110327-6708 in cluster embed-certs-20220601110327-6708
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0601 11:03:27.299666  232046 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:03:27.299777  232046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:03:27.299787  232046 out.go:309] Setting ErrFile to fd 2...
	I0601 11:03:27.299797  232046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:03:27.299950  232046 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:03:27.300231  232046 out.go:303] Setting JSON to false
	I0601 11:03:27.301890  232046 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2762,"bootTime":1654078646,"procs":770,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:03:27.301963  232046 start.go:125] virtualization: kvm guest
	I0601 11:03:27.304661  232046 out.go:177] * [embed-certs-20220601110327-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:03:27.306100  232046 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:03:27.307438  232046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:03:27.306099  232046 notify.go:193] Checking for updates...
	I0601 11:03:27.308848  232046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:03:27.310220  232046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:03:27.311532  232046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:03:27.313329  232046 config.go:178] Loaded profile config "calico-20220601104839-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:03:27.313514  232046 config.go:178] Loaded profile config "no-preload-20220601105939-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:03:27.313633  232046 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:03:27.313685  232046 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:03:27.354261  232046 docker.go:137] docker version: linux-20.10.16
	I0601 11:03:27.354368  232046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:03:27.480240  232046 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:03:27.386350535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:03:27.480347  232046 docker.go:254] overlay module found
	I0601 11:03:27.482738  232046 out.go:177] * Using the docker driver based on user configuration
	I0601 11:03:27.484175  232046 start.go:284] selected driver: docker
	I0601 11:03:27.484191  232046 start.go:806] validating driver "docker" against <nil>
	I0601 11:03:27.484208  232046 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:03:27.485098  232046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:03:27.589407  232046 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:03:27.51589514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:03:27.589532  232046 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:03:27.589672  232046 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:03:27.591603  232046 out.go:177] * Using Docker driver with the root privilege
	I0601 11:03:27.592928  232046 cni.go:95] Creating CNI manager for ""
	I0601 11:03:27.592942  232046 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:03:27.592957  232046 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:03:27.592967  232046 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:03:27.592974  232046 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 11:03:27.592999  232046 start_flags.go:306] config:
	{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:03:27.594436  232046 out.go:177] * Starting control plane node embed-certs-20220601110327-6708 in cluster embed-certs-20220601110327-6708
	I0601 11:03:27.595727  232046 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:03:27.597093  232046 out.go:177] * Pulling base image ...
	I0601 11:03:27.598435  232046 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:03:27.598463  232046 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:03:27.598480  232046 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:03:27.598602  232046 cache.go:57] Caching tarball of preloaded images
	I0601 11:03:27.598818  232046 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:03:27.598843  232046 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:03:27.598939  232046 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:03:27.598960  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json: {Name:mk24cb3999beb25f5865c696fbad7fc73716c1d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:27.646499  232046 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:03:27.646524  232046 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:03:27.646533  232046 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:03:27.646570  232046 start.go:352] acquiring machines lock for embed-certs-20220601110327-6708: {Name:mk2bc8f54b3ac1967b6e5e724f1be8808370dc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:03:27.646698  232046 start.go:356] acquired machines lock for "embed-certs-20220601110327-6708" in 107.872µs
	I0601 11:03:27.646732  232046 start.go:91] Provisioning new machine with config: &{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:03:27.646835  232046 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:03:27.649304  232046 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:03:27.649537  232046 start.go:165] libmachine.API.Create for "embed-certs-20220601110327-6708" (driver="docker")
	I0601 11:03:27.649568  232046 client.go:168] LocalClient.Create starting
	I0601 11:03:27.649623  232046 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:03:27.649654  232046 main.go:134] libmachine: Decoding PEM data...
	I0601 11:03:27.649673  232046 main.go:134] libmachine: Parsing certificate...
	I0601 11:03:27.649724  232046 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:03:27.649742  232046 main.go:134] libmachine: Decoding PEM data...
	I0601 11:03:27.649755  232046 main.go:134] libmachine: Parsing certificate...
	I0601 11:03:27.650032  232046 cli_runner.go:164] Run: docker network inspect embed-certs-20220601110327-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:03:27.681068  232046 cli_runner.go:211] docker network inspect embed-certs-20220601110327-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:03:27.681136  232046 network_create.go:272] running [docker network inspect embed-certs-20220601110327-6708] to gather additional debugging logs...
	I0601 11:03:27.681161  232046 cli_runner.go:164] Run: docker network inspect embed-certs-20220601110327-6708
	W0601 11:03:27.711044  232046 cli_runner.go:211] docker network inspect embed-certs-20220601110327-6708 returned with exit code 1
	I0601 11:03:27.711071  232046 network_create.go:275] error running [docker network inspect embed-certs-20220601110327-6708]: docker network inspect embed-certs-20220601110327-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220601110327-6708
	I0601 11:03:27.711087  232046 network_create.go:277] output of [docker network inspect embed-certs-20220601110327-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220601110327-6708
	
	** /stderr **
	I0601 11:03:27.711128  232046 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:03:27.741548  232046 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-e3f0b201da39 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:75:ce:92:ab}}
	I0601 11:03:27.742201  232046 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-99443bab5d3f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:22:72:77:e2}}
	I0601 11:03:27.742658  232046 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-787fac1877c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:00:92:62:57}}
	I0601 11:03:27.743303  232046 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc0005248e8] misses:0}
	I0601 11:03:27.743334  232046 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:03:27.743348  232046 network_create.go:115] attempt to create docker network embed-certs-20220601110327-6708 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0601 11:03:27.743397  232046 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601110327-6708
	I0601 11:03:27.808157  232046 network_create.go:99] docker network embed-certs-20220601110327-6708 192.168.76.0/24 created
	I0601 11:03:27.808191  232046 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-20220601110327-6708" container
	I0601 11:03:27.808249  232046 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:03:27.842915  232046 cli_runner.go:164] Run: docker volume create embed-certs-20220601110327-6708 --label name.minikube.sigs.k8s.io=embed-certs-20220601110327-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:03:27.874391  232046 oci.go:103] Successfully created a docker volume embed-certs-20220601110327-6708
	I0601 11:03:27.874472  232046 cli_runner.go:164] Run: docker run --rm --name embed-certs-20220601110327-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220601110327-6708 --entrypoint /usr/bin/test -v embed-certs-20220601110327-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:03:28.439714  232046 oci.go:107] Successfully prepared a docker volume embed-certs-20220601110327-6708
	I0601 11:03:28.439786  232046 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:03:28.439813  232046 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:03:28.439924  232046 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220601110327-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 11:03:35.969715  232046 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220601110327-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (7.529712266s)
	I0601 11:03:35.969747  232046 kic.go:188] duration metric: took 7.529932 seconds to extract preloaded images to volume
	W0601 11:03:35.969882  232046 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 11:03:35.969977  232046 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:03:36.072639  232046 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20220601110327-6708 --name embed-certs-20220601110327-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220601110327-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20220601110327-6708 --network embed-certs-20220601110327-6708 --ip 192.168.76.2 --volume embed-certs-20220601110327-6708:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:03:36.485272  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Running}}
	I0601 11:03:36.521985  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:03:36.556366  232046 cli_runner.go:164] Run: docker exec embed-certs-20220601110327-6708 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:03:36.619414  232046 oci.go:247] the created container "embed-certs-20220601110327-6708" has a running status.
	I0601 11:03:36.619447  232046 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa...
	I0601 11:03:36.967549  232046 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:03:37.050742  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:03:37.083967  232046 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:03:37.083990  232046 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220601110327-6708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:03:37.165827  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:03:37.197294  232046 machine.go:88] provisioning docker machine ...
	I0601 11:03:37.197354  232046 ubuntu.go:169] provisioning hostname "embed-certs-20220601110327-6708"
	I0601 11:03:37.197415  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.227438  232046 main.go:134] libmachine: Using SSH client type: native
	I0601 11:03:37.227631  232046 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0601 11:03:37.227656  232046 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220601110327-6708 && echo "embed-certs-20220601110327-6708" | sudo tee /etc/hostname
	I0601 11:03:37.347961  232046 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220601110327-6708
	
	I0601 11:03:37.348021  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.379387  232046 main.go:134] libmachine: Using SSH client type: native
	I0601 11:03:37.379521  232046 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0601 11:03:37.379565  232046 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220601110327-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220601110327-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220601110327-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:03:37.499472  232046 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:03:37.499496  232046 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:03:37.499515  232046 ubuntu.go:177] setting up certificates
	I0601 11:03:37.499523  232046 provision.go:83] configureAuth start
	I0601 11:03:37.499566  232046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:03:37.531509  232046 provision.go:138] copyHostCerts
	I0601 11:03:37.531564  232046 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:03:37.531571  232046 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:03:37.531630  232046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:03:37.531696  232046 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:03:37.531702  232046 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:03:37.531724  232046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:03:37.531774  232046 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:03:37.531783  232046 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:03:37.531801  232046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:03:37.531841  232046 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220601110327-6708 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220601110327-6708]
	I0601 11:03:37.611352  232046 provision.go:172] copyRemoteCerts
	I0601 11:03:37.611407  232046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:03:37.611439  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.642849  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:37.727077  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 11:03:37.743826  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:03:37.760239  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:03:37.776574  232046 provision.go:86] duration metric: configureAuth took 277.039388ms
	I0601 11:03:37.776599  232046 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:03:37.776775  232046 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:03:37.776793  232046 machine.go:91] provisioned docker machine in 579.472653ms
	I0601 11:03:37.776801  232046 client.go:171] LocalClient.Create took 10.127224112s
	I0601 11:03:37.776825  232046 start.go:173] duration metric: libmachine.API.Create for "embed-certs-20220601110327-6708" took 10.127283476s
	I0601 11:03:37.776838  232046 start.go:306] post-start starting for "embed-certs-20220601110327-6708" (driver="docker")
	I0601 11:03:37.776844  232046 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:03:37.776882  232046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:03:37.776915  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.808696  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:37.895158  232046 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:03:37.897676  232046 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:03:37.897697  232046 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:03:37.897712  232046 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:03:37.897720  232046 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:03:37.897730  232046 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:03:37.897783  232046 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:03:37.897857  232046 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:03:37.897957  232046 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:03:37.904368  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:03:37.921323  232046 start.go:309] post-start completed in 144.475564ms
	I0601 11:03:37.921630  232046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:03:37.953932  232046 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:03:37.954186  232046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:03:37.954237  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:37.984421  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:38.068308  232046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:03:38.072096  232046 start.go:134] duration metric: createHost completed in 10.42524994s
	I0601 11:03:38.072124  232046 start.go:81] releasing machines lock for "embed-certs-20220601110327-6708", held for 10.425412714s
	I0601 11:03:38.072205  232046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:03:38.103759  232046 ssh_runner.go:195] Run: systemctl --version
	I0601 11:03:38.103804  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:38.103904  232046 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:03:38.103971  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:03:38.139307  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:38.140299  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:03:38.224785  232046 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:03:38.249773  232046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:03:38.258735  232046 docker.go:187] disabling docker service ...
	I0601 11:03:38.258782  232046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:03:38.274556  232046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:03:38.283463  232046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:03:38.363548  232046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:03:38.438994  232046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:03:38.447915  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:03:38.460136  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:03:38.467936  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:03:38.475365  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:03:38.483074  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:03:38.490614  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:03:38.498210  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
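The runtime reconfiguration above is done entirely with sed and tee over /etc/containerd/config.toml (sandbox image, oom-score restriction, cgroup driver, CNI conf dir, and an imports line), and the final drop-in is shipped base64-encoded so it survives the remote shell's quoting. The payload dmVyc2lvbiA9IDIK decodes to the single line `version = 2`, which can be verified with nothing beyond the standard library:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The payload piped through `base64 -d` in the log line above.
	b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", b) // prints "version = 2\n"
}
```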
	I0601 11:03:38.510391  232046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:03:38.517147  232046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:03:38.523067  232046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:03:38.594580  232046 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:03:38.672884  232046 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:03:38.672946  232046 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:03:38.676772  232046 start.go:468] Will wait 60s for crictl version
	I0601 11:03:38.676824  232046 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:03:38.703410  232046 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
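Between restarting containerd and getting the crictl version report above, minikube allows 60s for /run/containerd/containerd.sock to appear (start.go:447) and another 60s for crictl to answer (start.go:468). A minimal sketch of that wait-for-socket pattern; the path and budget mirror the log, but the code is illustrative rather than minikube's own:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes,
// roughly what "Will wait 60s for socket path ..." does above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("socket is up")
}
```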
	I0601 11:03:38.703469  232046 ssh_runner.go:195] Run: containerd --version
	I0601 11:03:38.730051  232046 ssh_runner.go:195] Run: containerd --version
	I0601 11:03:38.759660  232046 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:03:38.761313  232046 cli_runner.go:164] Run: docker network inspect embed-certs-20220601110327-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:03:38.792776  232046 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0601 11:03:38.796019  232046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
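The /etc/hosts update above uses a grep-then-rewrite idiom: drop any existing host.minikube.internal record, append a fresh one, write the result to a temp file, and `sudo cp` it back (a plain `>` redirect would not run with sudo's privileges). The same transform in Go, operating on a string so it can be run safely anywhere:

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line whose hostname field is host.minikube.internal
// and appends the new record, mirroring the bash one-liner in the log.
func upsertHost(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	sample := "127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"
	fmt.Print(upsertHost(sample, "192.168.76.1"))
}
```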
	I0601 11:03:38.807138  232046 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:03:38.808465  232046 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:03:38.808518  232046 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:03:38.831179  232046 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:03:38.831201  232046 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:03:38.831236  232046 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:03:38.854438  232046 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:03:38.854456  232046 cache_images.go:84] Images are preloaded, skipping loading
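The preload check at containerd.go:547 shells out to `sudo crictl images --output json` and inspects the returned tag list; since every expected image is already present, both tarball extraction and image loading are skipped. A sketch of parsing that output, assuming the {"images":[{"repoTags":[...]}]} shape crictl emits (the field names are an assumption from crictl's JSON schema as I know it, not something shown verbatim in this log):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageList is a minimal slice of crictl's `images --output json`
// schema (assumed shape; only the fields this check needs).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Stand-in for the real crictl output captured over SSH.
	raw := []byte(`{"images":[{"repoTags":["k8s.gcr.io/pause:3.6"]},{"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"]}]}`)
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			have[t] = true
		}
	}
	for _, want := range []string{"k8s.gcr.io/pause:3.6", "k8s.gcr.io/kube-apiserver:v1.23.6"} {
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}
```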
	I0601 11:03:38.854492  232046 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:03:38.877089  232046 cni.go:95] Creating CNI manager for ""
	I0601 11:03:38.877113  232046 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:03:38.877130  232046 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:03:38.877147  232046 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220601110327-6708 NodeName:embed-certs-20220601110327-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:03:38.877304  232046 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220601110327-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:03:38.877407  232046 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220601110327-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
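The generated kubeadm config printed above is one stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`, all handed to kubeadm through the single --config file; the kubelet drop-in beneath it wires the matching flags. Splitting such a stream programmatically is a one-decoder loop; a sketch with gopkg.in/yaml.v3 (a third-party module, so it needs a `go get`; the two-document stream here is a trimmed stand-in):

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	stream := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc struct {
			Kind string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			panic(err)
		}
		fmt.Println("document kind:", doc.Kind)
	}
}
```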
	I0601 11:03:38.877460  232046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:03:38.884305  232046 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:03:38.884366  232046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:03:38.890762  232046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0601 11:03:38.902732  232046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:03:38.914836  232046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0601 11:03:38.926843  232046 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:03:38.929504  232046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:03:38.938158  232046 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708 for IP: 192.168.76.2
	I0601 11:03:38.938248  232046 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:03:38.938288  232046 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:03:38.938340  232046 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.key
	I0601 11:03:38.938356  232046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.crt with IP's: []
	I0601 11:03:39.037676  232046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.crt ...
	I0601 11:03:39.037700  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.crt: {Name:mkb482ef7c144c6701a53669ca934f9776ff7e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.037877  232046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.key ...
	I0601 11:03:39.037890  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.key: {Name:mk86cf41124d17fab06576d0d1084ed026783a31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.037971  232046 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25
	I0601 11:03:39.037986  232046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:03:39.203094  232046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt.31bdca25 ...
	I0601 11:03:39.203119  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt.31bdca25: {Name:mkbc3d579be8c4dc7924c3d14d84afaa6c55b0d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.203293  232046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25 ...
	I0601 11:03:39.203306  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25: {Name:mk4f3e4a8d736c0ee906145e7cc0096de085b853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.203388  232046 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt
	I0601 11:03:39.203449  232046 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key
	I0601 11:03:39.203496  232046 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key
	I0601 11:03:39.203509  232046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt with IP's: []
	I0601 11:03:39.412522  232046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt ...
	I0601 11:03:39.412549  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt: {Name:mk3219f008b9eb06ac1b5c7c488bd5ddb176e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:03:39.412738  232046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key ...
	I0601 11:03:39.412751  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key: {Name:mkf15b71171093bbb6fa967d367fb967f44f4dab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
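The crypto.go steps above mint the profile's client, apiserver, and aggregator certificates; note the apiserver cert is generated with IP SANs for the node address, the service-network VIP 10.96.0.1, and loopback, matching the certSANs list in the kubeadm config. A self-contained sketch of issuing a certificate with IP SANs using only the standard library (the key type, subject, and validity here are my choices for the example, not minikube's):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-demo"}, // placeholder subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs seen in the log for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.76.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	// Self-signed for brevity; minikube signs these with its own CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```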
	I0601 11:03:39.412919  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:03:39.412954  232046 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:03:39.412968  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:03:39.412995  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:03:39.413022  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:03:39.413055  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:03:39.413097  232046 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:03:39.413629  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:03:39.431855  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:03:39.448411  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:03:39.465213  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:03:39.481416  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:03:39.500003  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:03:39.517883  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:03:39.534165  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:03:39.550900  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:03:39.569091  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:03:39.586565  232046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:03:39.603409  232046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:03:39.615673  232046 ssh_runner.go:195] Run: openssl version
	I0601 11:03:39.620205  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:03:39.627365  232046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:03:39.630360  232046 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:03:39.630404  232046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:03:39.635070  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:03:39.642021  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:03:39.649356  232046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:03:39.652354  232046 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:03:39.652403  232046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:03:39.657214  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:03:39.664554  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:03:39.672005  232046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:03:39.675027  232046 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:03:39.675084  232046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:03:39.680552  232046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
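The three test/ln blocks above are how the extra CAs become trusted inside the node: each PEM is linked into /usr/share/ca-certificates and then into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), because OpenSSL locates CA certificates by those hash-named files. A sketch of producing such a link, shelling out to `openssl x509 -hash` exactly as the log does (assumes openssl on PATH; the file and directory names are placeholders):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "minikubeCA.pem" // placeholder input certificate
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("certs-dir", hash+".0") // placeholder dir, must exist
	// `ln -fs` equivalent: drop a stale link first, then symlink.
	os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", cert, "->", link)
}
```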
	I0601 11:03:39.687796  232046 kubeadm.go:395] StartCluster: {Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:03:39.687929  232046 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:03:39.687982  232046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:03:39.713887  232046 cri.go:87] found id: ""
	I0601 11:03:39.713948  232046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:03:39.720682  232046 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:03:39.727314  232046 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:03:39.727367  232046 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:03:39.733877  232046 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:03:39.733909  232046 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:03:39.981267  232046 out.go:204]   - Generating certificates and keys ...
	I0601 11:03:43.132417  232046 out.go:204]   - Booting up control plane ...
	I0601 11:03:55.175235  232046 out.go:204]   - Configuring RBAC rules ...
	I0601 11:03:55.599695  232046 cni.go:95] Creating CNI manager for ""
	I0601 11:03:55.599715  232046 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:03:55.601639  232046 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:03:55.603071  232046 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:03:55.606486  232046 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:03:55.606506  232046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:03:55.619251  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:03:56.339205  232046 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:03:56.339270  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:03:56.339277  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601110327-6708 minikube.k8s.io/updated_at=2022_06_01T11_03_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:03:56.397220  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:03:56.397227  232046 ops.go:34] apiserver oom_adj: -16
	I0601 11:03:56.967112  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:03:57.467038  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:03:57.967295  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:03:58.466669  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:03:58.967048  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:03:59.467227  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:03:59.967474  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:00.466592  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:00.967143  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:01.466729  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:01.967157  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:02.466630  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:02.966992  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:03.467545  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:03.967436  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:04.466899  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:04.967367  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:05.466884  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:05.967312  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:06.467257  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:06.967334  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:07.467296  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:07.967315  232046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:04:08.024665  232046 kubeadm.go:1045] duration metric: took 11.68546273s to wait for elevateKubeSystemPrivileges.
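The burst of identical `kubectl get sa default` runs from 11:03:56 to 11:04:08 is a fixed-interval retry: kubeadm has just finished, and minikube polls roughly every 500ms until the default ServiceAccount exists (11.69s here) before the cluster-admin binding for kube-system can take effect. The pattern, sketched with os/exec (kubeconfig path and interval mirror the log; the code itself is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds
// or the deadline passes, like the 500ms loop in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account exists")
}
```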
	I0601 11:04:08.024697  232046 kubeadm.go:397] StartCluster complete in 28.336910367s
	I0601 11:04:08.024717  232046 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:04:08.024830  232046 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:04:08.027161  232046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:04:08.543516  232046 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601110327-6708" rescaled to 1
	I0601 11:04:08.543603  232046 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:04:08.543640  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:04:08.545037  232046 out.go:177] * Verifying Kubernetes components...
	I0601 11:04:08.543696  232046 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 11:04:08.543852  232046 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:04:08.546841  232046 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601110327-6708"
	I0601 11:04:08.546847  232046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:04:08.546861  232046 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601110327-6708"
	W0601 11:04:08.546870  232046 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:04:08.546910  232046 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:04:08.546841  232046 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601110327-6708"
	I0601 11:04:08.546943  232046 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601110327-6708"
	I0601 11:04:08.547302  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:04:08.547445  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:04:08.593782  232046 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:04:08.595669  232046 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:04:08.595685  232046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:04:08.595723  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:04:08.596975  232046 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601110327-6708"
	W0601 11:04:08.596999  232046 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:04:08.597026  232046 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:04:08.597518  232046 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:04:08.634812  232046 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:04:08.634837  232046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:04:08.634892  232046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:04:08.635276  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:04:08.659080  232046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:04:08.660513  232046 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:04:08.675484  232046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:04:08.872802  232046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:04:08.874563  232046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:04:09.168535  232046 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
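The long pipeline at 11:04:08.659 edits the coredns ConfigMap in place: sed inserts a hosts stanza mapping host.minikube.internal to 192.168.76.1 just before the Corefile's `forward . /etc/resolv.conf` line, and the result is fed back through `kubectl replace -f -`, giving in-cluster DNS a route to the host gateway. The same string surgery in Go (the Corefile text is a trimmed stand-in for the real ConfigMap contents):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}`
	stanza := `        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
`
	// Insert the hosts block immediately before the forward directive,
	// matching the sed expression in the log.
	patched := strings.Replace(corefile, "        forward .", stanza+"        forward .", 1)
	fmt.Println(patched)
}
```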
	I0601 11:04:09.302126  232046 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0601 11:04:09.303413  232046 addons.go:417] enableAddons completed in 759.705563ms
	I0601 11:04:10.668133  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:13.167682  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:15.167939  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:17.168355  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:19.667843  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:22.168040  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:24.667624  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:27.167914  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:29.667704  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:31.667923  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:33.668079  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:35.668141  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:38.168032  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:40.667708  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:43.168490  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:45.667800  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:48.168484  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:50.668184  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:52.670024  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:55.167793  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:57.168104  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:04:59.667975  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:02.167715  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:04.168039  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:06.168253  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:08.668021  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:10.668092  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:13.168241  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:15.667982  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:18.167489  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:20.168333  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:22.667531  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:25.168143  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:27.668047  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:30.168180  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:32.168254  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:34.667768  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:37.168260  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:39.667602  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:41.667719  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:44.167462  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:46.168129  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:48.667659  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:50.668172  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:53.168223  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:55.168284  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:05:57.667781  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:00.167806  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:02.167999  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:04.169084  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:06.667855  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:09.168184  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:11.668034  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:14.168155  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:16.667946  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:19.167962  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:21.668395  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:24.168491  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:26.668038  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:28.668526  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:31.167999  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:33.168156  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:35.168451  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:37.667203  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:39.667972  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:42.167953  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:44.667807  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:47.168499  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:49.668528  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:52.168292  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:54.668423  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:57.168205  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:59.334049  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:01.667968  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:03.668636  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:06.167338  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:08.167714  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:10.168162  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:12.667278  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:14.668246  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:17.168197  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:19.668390  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.668490  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:24.167646  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:26.168090  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:28.667871  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:30.668198  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:33.167882  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:35.168161  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:37.168296  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:39.667840  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:42.167654  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:44.667876  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:47.168006  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:49.168183  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:51.667932  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:54.167913  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:56.167953  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:58.168275  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:00.668037  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:03.167969  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:05.667837  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.168013  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.670567  232046 node_ready.go:38] duration metric: took 4m0.010022239s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
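This is the actual failure: every poll from 11:04:08 onward (about one every 2.5s) reports the node NotReady, and after 4m the node_ready budget is exhausted, so the `--wait` flow exits with the GUEST_START error below; the log never shows the node turning Ready. The readiness check reduces to reading the node's Ready condition; a sketch via kubectl and JSONPath (minikube's node_ready.go queries the API directly rather than shelling out, so this is an assumption-laden stand-in):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(node string) bool {
	out, err := exec.Command("kubectl", "get", "node", node,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	node := "embed-certs-20220601110327-6708" // node name from the log
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("timed out: node never became Ready")
}
```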
	I0601 11:08:08.673338  232046 out.go:177] 
	W0601 11:08:08.675576  232046 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:08:08.675599  232046 out.go:239] * 
	W0601 11:08:08.676630  232046 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:08:08.678476  232046 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p embed-certs-20220601110327-6708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601110327-6708
helpers_test.go:235: (dbg) docker inspect embed-certs-20220601110327-6708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d",
	        "Created": "2022-06-01T11:03:36.104826313Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:03:36.476018297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hosts",
	        "LogPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d-json.log",
	        "Name": "/embed-certs-20220601110327-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220601110327-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220601110327-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220601110327-6708",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220601110327-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220601110327-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e07617b2a6be7f1d7fcd4f72c38164dc41010e13179d5f3d71f30078705fa21",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6e07617b2a6b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220601110327-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b77a5d5e61bf",
	                        "embed-certs-20220601110327-6708"
	                    ],
	                    "NetworkID": "85c31b5e416e869b4ae1612c11e4fd39718a187a5009c211794c61313cb0c682",
	                    "EndpointID": "8df55589072b1e0d65a42a89f9b0e4d5153d5de972481a98d522d287ef34389c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
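The inspect dump confirms the Docker layer is healthy: State.Status is "running" with ExitCode 0, all five guest ports (22, 2376, 5000, 8443, 32443) are published on 127.0.0.1, and the container holds 192.168.76.2 on the profile's network, so the readiness failure sits inside the guest rather than at the container level. When only those fields are of interest, docker inspect's standard Go-template --format flag extracts them without the full dump, for example:

	docker inspect --format '{{.State.Status}} exit={{.State.ExitCode}}' embed-certs-20220601110327-6708
	docker inspect --format '{{json .NetworkSettings.Ports}}' embed-certs-20220601110327-6708
	docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-20220601110327-6708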
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220601110327-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                                | enable-default-cni-20220601104837-6708    | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:56 UTC | 01 Jun 22 10:57 UTC |
	|         | enable-default-cni-20220601104837-6708            |                                           |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220601104837-6708    | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:57 UTC |
	|         | enable-default-cni-20220601104837-6708            |                                           |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601104837-6708    | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	|         | enable-default-cni-20220601104837-6708            |                                           |         |                |                     |                     |
	| start   | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:58 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	| start   | -p calico-20220601104839-6708                     | calico-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=calico --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p calico-20220601104839-6708                     | calico-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| start   | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=cilium --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	| start   | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| logs    | auto-20220601104837-6708 logs                     | auto-20220601104837-6708                  | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | -n 25                                             |                                           |         |                |                     |                     |
	| delete  | -p auto-20220601104837-6708                       | auto-20220601104837-6708                  | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	| logs    | old-k8s-version-20220601105850-6708               | old-k8s-version-20220601105850-6708       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | disable-driver-mounts-20220601110654-6708         |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:06:54
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:06:54.667302  244383 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:06:54.667430  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667448  244383 out.go:309] Setting ErrFile to fd 2...
	I0601 11:06:54.667455  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667611  244383 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:06:54.668037  244383 out.go:303] Setting JSON to false
	I0601 11:06:54.669846  244383 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2969,"bootTime":1654078646,"procs":645,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:06:54.669914  244383 start.go:125] virtualization: kvm guest
	I0601 11:06:54.672039  244383 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:06:54.673519  244383 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:06:54.673532  244383 notify.go:193] Checking for updates...
	I0601 11:06:54.676498  244383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:06:54.678066  244383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:06:54.679578  244383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:06:54.681049  244383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:06:54.682891  244383 config.go:178] Loaded profile config "calico-20220601104839-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683008  244383 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683105  244383 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:06:54.683158  244383 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:06:54.724298  244383 docker.go:137] docker version: linux-20.10.16
	I0601 11:06:54.724374  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.826819  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.7540349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientI
nfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.826932  244383 docker.go:254] overlay module found
	I0601 11:06:54.829003  244383 out.go:177] * Using the docker driver based on user configuration
	I0601 11:06:54.830315  244383 start.go:284] selected driver: docker
	I0601 11:06:54.830327  244383 start.go:806] validating driver "docker" against <nil>
	I0601 11:06:54.830352  244383 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:06:54.831265  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.931062  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.859997014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.931188  244383 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:06:54.931414  244383 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:06:54.933788  244383 out.go:177] * Using Docker driver with the root privilege
	I0601 11:06:54.935205  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:06:54.935218  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:06:54.935233  244383 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935238  244383 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935243  244383 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 11:06:54.935250  244383 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:06:54.936846  244383 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:06:54.938038  244383 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:06:54.939519  244383 out.go:177] * Pulling base image ...
	I0601 11:06:54.940856  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:54.940881  244383 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:06:54.940905  244383 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:06:54.940928  244383 cache.go:57] Caching tarball of preloaded images
	I0601 11:06:54.941154  244383 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:06:54.941186  244383 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:06:54.941308  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:06:54.941333  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json: {Name:mk8b3d87cba3844f82b835b906c4fc7fcf103163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:06:54.986323  244383 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:06:54.986351  244383 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:06:54.986370  244383 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:06:54.986406  244383 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:06:54.986553  244383 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 123.17µs
	I0601 11:06:54.986588  244383 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:06:54.986696  244383 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:06:54.668423  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:57.168205  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:54.989283  244383 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:06:54.989495  244383 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:06:54.989523  244383 client.go:168] LocalClient.Create starting
	I0601 11:06:54.989576  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:06:54.989602  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989620  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.989670  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:06:54.989686  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989697  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.990003  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:06:55.021531  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:06:55.021592  244383 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601110654-6708] to gather additional debugging logs...
	I0601 11:06:55.021618  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708
	W0601 11:06:55.051948  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 returned with exit code 1
	I0601 11:06:55.051984  244383 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601110654-6708]: docker network inspect default-k8s-different-port-20220601110654-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.052003  244383 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601110654-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601110654-6708
	
	** /stderr **
	I0601 11:06:55.052049  244383 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:06:55.083654  244383 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001322e0] misses:0}
	I0601 11:06:55.083702  244383 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:06:55.083718  244383 network_create.go:115] attempt to create docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:06:55.083760  244383 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.150185  244383 network_create.go:99] docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 created
	I0601 11:06:55.150232  244383 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20220601110654-6708" container
	I0601 11:06:55.150301  244383 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:06:55.185029  244383 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220601110654-6708 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:06:55.218896  244383 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.218982  244383 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220601110654-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --entrypoint /usr/bin/test -v default-k8s-different-port-20220601110654-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:06:55.773802  244383 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.773849  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:55.773871  244383 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:06:55.773932  244383 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 11:06:59.334049  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:01.667968  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:03.152484  244383 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (7.378487132s)
	I0601 11:07:03.152523  244383 kic.go:188] duration metric: took 7.378645 seconds to extract preloaded images to volume
	W0601 11:07:03.152655  244383 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 11:07:03.152754  244383 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:07:03.258344  244383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220601110654-6708 --name default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --network default-k8s-different-port-20220601110654-6708 --ip 192.168.49.2 --volume default-k8s-different-port-20220601110654-6708:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:07:03.640637  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Running}}
	I0601 11:07:03.675247  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.707758  244383 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220601110654-6708 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:07:03.767985  244383 oci.go:247] the created container "default-k8s-different-port-20220601110654-6708" has a running status.
	I0601 11:07:03.768013  244383 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa...
	I0601 11:07:03.823786  244383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:07:03.917787  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.956706  244383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:07:03.956735  244383 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220601110654-6708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:07:04.044516  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:04.081442  244383 machine.go:88] provisioning docker machine ...
	I0601 11:07:04.081477  244383 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:04.081535  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.119200  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.119405  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.119425  244383 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:07:04.249668  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:07:04.249734  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.283443  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.283593  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.283628  244383 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:07:04.395587  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:07:04.395617  244383 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:07:04.395643  244383 ubuntu.go:177] setting up certificates
	I0601 11:07:04.395652  244383 provision.go:83] configureAuth start
	I0601 11:07:04.395697  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.427413  244383 provision.go:138] copyHostCerts
	I0601 11:07:04.427469  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:07:04.427481  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:07:04.427543  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:07:04.427622  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:07:04.427632  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:07:04.427659  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:07:04.427708  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:07:04.427721  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:07:04.427753  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:07:04.427802  244383 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
	I0601 11:07:04.535631  244383 provision.go:172] copyRemoteCerts
	I0601 11:07:04.535685  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:07:04.535726  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.568780  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.659152  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:07:04.676610  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:07:04.694731  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:07:04.711549  244383 provision.go:86] duration metric: configureAuth took 315.887909ms
	I0601 11:07:04.711573  244383 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:07:04.711735  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:04.711748  244383 machine.go:91] provisioned docker machine in 630.288068ms
	I0601 11:07:04.711754  244383 client.go:171] LocalClient.Create took 9.722222745s
	I0601 11:07:04.711778  244383 start.go:173] duration metric: libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" took 9.722275215s
	I0601 11:07:04.711793  244383 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:07:04.711800  244383 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:07:04.711844  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:07:04.711903  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.745536  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.831037  244383 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:07:04.833655  244383 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:07:04.833679  244383 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:07:04.833703  244383 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:07:04.833716  244383 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:07:04.833726  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:07:04.833775  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:07:04.833870  244383 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:07:04.833975  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:07:04.840420  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:04.857187  244383 start.go:309] post-start completed in 145.384397ms
	I0601 11:07:04.857493  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.888747  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:07:04.888963  244383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:07:04.889000  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.919352  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.000243  244383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:07:05.004050  244383 start.go:134] duration metric: createHost completed in 10.017341223s
	I0601 11:07:05.004075  244383 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 10.017502791s
	I0601 11:07:05.004171  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035905  244383 ssh_runner.go:195] Run: systemctl --version
	I0601 11:07:05.035960  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035972  244383 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:07:05.036031  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.069327  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.070632  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.175990  244383 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:07:05.186279  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:07:05.194913  244383 docker.go:187] disabling docker service ...
	I0601 11:07:05.194953  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:07:05.211132  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:07:05.219763  244383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:07:05.302855  244383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:07:05.379942  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:07:05.388684  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:07:05.401125  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.408798  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.416626  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.424218  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.431786  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:07:05.439234  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
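Editor's note: the sed and printf commands above rewrite /etc/containerd/config.toml before containerd is restarted (the base64 payload "dmVyc2lvbiA9IDIK" decodes to `version = 2`). Assuming the stock kicbase config.toml layout for containerd 1.6, the keys they touch end up roughly as follows; this is a sketch of the affected fragment only, not the full file.

# /etc/containerd/config.toml (affected keys only; sketch)
version = 2
imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "k8s.gcr.io/pause:3.6"
  restrict_oom_score_adj = false

  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.mk"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false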
	I0601 11:07:05.451481  244383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:07:05.457796  244383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:07:05.464201  244383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:07:05.540478  244383 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:07:05.650499  244383 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:07:05.650567  244383 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:07:05.654052  244383 start.go:468] Will wait 60s for crictl version
	I0601 11:07:05.654103  244383 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:07:05.681128  244383 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:07:05.681188  244383 ssh_runner.go:195] Run: containerd --version
	I0601 11:07:05.710828  244383 ssh_runner.go:195] Run: containerd --version
	I0601 11:07:05.741779  244383 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:07:05.743207  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:07:05.773719  244383 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:07:05.777293  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:07:05.788623  244383 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:07:05.790049  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:07:05.790117  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.812809  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.812831  244383 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:07:05.812869  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.834860  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.834879  244383 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:07:05.834947  244383 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:07:05.857173  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:05.857192  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:05.857218  244383 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:07:05.857235  244383 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:07:05.857383  244383 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:07:05.857471  244383 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 11:07:05.857530  244383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:07:05.864412  244383 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:07:05.864485  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:07:05.870921  244383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:07:05.883133  244383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:07:05.896240  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:07:05.908996  244383 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:07:05.911816  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:07:05.920740  244383 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:07:05.920863  244383 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:07:05.920906  244383 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:07:05.920964  244383 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:07:05.920984  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt with IP's: []
	I0601 11:07:06.190511  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt ...
	I0601 11:07:06.190541  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt: {Name:mk1f0de9f338c1565864d345295f211cd6b42042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190751  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key ...
	I0601 11:07:06.190766  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key: {Name:mk3abd1ec1bc2a3303283efb1d56bffeb558d491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190855  244383 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:07:06.190870  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:07:06.411949  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 ...
	I0601 11:07:06.411982  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2: {Name:mk21c89d2fdd1fdc207dd136def37f5d90a62bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412202  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 ...
	I0601 11:07:06.412221  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2: {Name:mk2f4aae6eb49e6251c3e6c8e6f0f6462f382896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412314  244383 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt
	I0601 11:07:06.412369  244383 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key
	I0601 11:07:06.412451  244383 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:07:06.412469  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt with IP's: []
	I0601 11:07:06.545552  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt ...
	I0601 11:07:06.545619  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt: {Name:mkee564e3149cd8be755ca3cbe99f47feac8e4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.545807  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key ...
	I0601 11:07:06.545819  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key: {Name:mk3354416a46b334b24512eafd987800637af3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
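Editor's note: the certs.go/crypto.go steps above reuse the existing minikubeCA key and sign fresh leaf certificates, giving the apiserver cert the IP SANs listed in the log (192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1). Below is a self-contained sketch of that flow using only Go's stdlib crypto/x509; it is an illustration, not minikube's actual code, and error handling is elided for brevity.

// Sketch: sign a server certificate with IP SANs against a CA,
// mirroring the "generating minikube signed cert" steps above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair (minikube reuses ~/.minikube/ca.key rather than regenerating).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver cert with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}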
	I0601 11:07:06.547104  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:07:06.547148  244383 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:07:06.547174  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:07:06.547194  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:07:06.547234  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:07:06.547271  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:07:06.547327  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:06.547961  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:07:06.565921  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:07:06.584089  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:07:06.601191  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:07:06.618465  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:07:06.635815  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:07:06.653212  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:07:06.670886  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:07:06.687801  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:07:06.704953  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:07:06.721444  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:07:06.737875  244383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:07:06.751738  244383 ssh_runner.go:195] Run: openssl version
	I0601 11:07:06.756719  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:07:06.764146  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767163  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767216  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.771914  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:07:06.778934  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:07:06.786568  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789545  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789607  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.794248  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:07:06.801364  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:07:06.808247  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811196  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811252  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.816241  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:07:06.823684  244383 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:07:06.823768  244383 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:07:06.823809  244383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:07:06.847418  244383 cri.go:87] found id: ""
	I0601 11:07:06.847481  244383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:07:06.854612  244383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:07:06.861596  244383 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:07:06.861652  244383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:07:06.868516  244383 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:07:06.868568  244383 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:07:03.668636  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:06.167338  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:07.121183  244383 out.go:204]   - Generating certificates and keys ...
	I0601 11:07:09.218861  244383 out.go:204]   - Booting up control plane ...
	I0601 11:07:08.167714  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:10.168162  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:12.667278  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:14.668246  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:17.168197  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.259795  244383 out.go:204]   - Configuring RBAC rules ...
	I0601 11:07:21.672636  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:21.672654  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:21.674533  244383 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:07:19.668390  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.668490  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.675845  244383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:07:21.679515  244383 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:07:21.679534  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:07:21.692464  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:07:22.465311  244383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:07:22.465382  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.465395  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_07_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.521244  244383 ops.go:34] apiserver oom_adj: -16
	I0601 11:07:22.521263  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.109047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.609743  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.109036  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.609779  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.167646  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:26.168090  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:25.109823  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:25.609061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.108863  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.608780  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.109061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.609116  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.109699  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.609047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.109170  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.608851  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.667871  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:30.668198  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:30.109055  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:30.608852  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.109521  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.609057  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.108853  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.609531  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.108838  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.608822  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.108973  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.609839  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.671502  244383 kubeadm.go:1045] duration metric: took 12.206180961s to wait for elevateKubeSystemPrivileges.
	I0601 11:07:34.671537  244383 kubeadm.go:397] StartCluster complete in 27.847858486s
	I0601 11:07:34.671557  244383 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:34.671645  244383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:07:34.673551  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:35.189278  244383 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:07:35.189337  244383 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:07:35.191451  244383 out.go:177] * Verifying Kubernetes components...
	I0601 11:07:35.189391  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:07:35.189390  244383 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 11:07:35.189576  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:35.192926  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:07:35.192990  244383 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193023  244383 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193071  244383 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193025  244383 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.193134  244383 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:07:35.193178  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.193498  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.193681  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.209430  244383 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
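Editor's note: the repeated node_ready.go:58 lines throughout this log are a poll loop on the node's Ready condition. Below is a minimal client-go sketch of the same wait, assuming a standard kubeconfig location; it is not minikube's actual implementation.

// Sketch: poll a node's Ready condition until True or timeout,
// mirroring the node_ready.go wait ("waiting up to 6m0s ...").
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	name := "default-k8s-different-port-20220601110654-6708"
	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatalf("timed out waiting for node %q to be Ready", name)
}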
	I0601 11:07:35.237918  244383 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:07:35.239410  244383 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.239425  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:07:35.239470  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.255735  244383 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.255765  244383 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:07:35.255799  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.256352  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.277557  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.290858  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:07:35.296059  244383 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.296086  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:07:35.296137  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.338006  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.376722  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.468185  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.653594  244383 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0601 11:07:35.783515  244383 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 11:07:33.167882  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:35.168161  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:37.168296  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:35.784841  244383 addons.go:417] enableAddons completed in 595.455746ms
	I0601 11:07:37.216016  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:39.667840  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:42.167654  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:39.717025  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:42.216640  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:44.667876  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:47.168006  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:44.716894  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:47.216117  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:49.217067  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:49.168183  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:51.667932  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:51.716491  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:54.216277  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:54.167913  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:56.167953  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:56.216761  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:58.717105  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:58.168275  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:00.668037  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:01.216388  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:03.716389  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:03.167969  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:05.667837  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.168013  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.670567  232046 node_ready.go:38] duration metric: took 4m0.010022239s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:08:08.673338  232046 out.go:177] 
	W0601 11:08:08.675576  232046 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:08:08.675599  232046 out.go:239] * 
	W0601 11:08:08.676630  232046 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:08:08.678476  232046 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	44a64d6574af4       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   20ed2db10bff6
	303244519eacb       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   20ed2db10bff6
	4a8be0c7cfc53       4c03754524064       4 minutes ago        Running             kube-proxy                0                   043df8eb6f8fb
	d49ab0e8a34f4       8fa62c12256df       4 minutes ago        Running             kube-apiserver            0                   31f96fd01399b
	c32cb0a91408a       df7b72818ad2e       4 minutes ago        Running             kube-controller-manager   0                   87ef42c5de136
	a985029383eb2       595f327f224a4       4 minutes ago        Running             kube-scheduler            0                   a4a80ab623aae
	b8dd730d917c4       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   dfde8cf669db7
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:03:36 UTC, end at Wed 2022-06-01 11:08:09 UTC. --
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.477041027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.477055361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.477251030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43 pid=1701 runtime=io.containerd.runc.v2
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.478177653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.478260413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.478273351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.478583207Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/043df8eb6f8fb58b7bfeda2ebd8c0b1643bb3c7ee61029aa0178cf65b272a4c9 pid=1709 runtime=io.containerd.runc.v2
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.522956742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-99lsz,Uid:c2f232c6-4807-4bcf-a1ca-c39489a0112a,Namespace:kube-system,Attempt:0,} returns sandbox id \"043df8eb6f8fb58b7bfeda2ebd8c0b1643bb3c7ee61029aa0178cf65b272a4c9\""
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.525750711Z" level=info msg="CreateContainer within sandbox \"043df8eb6f8fb58b7bfeda2ebd8c0b1643bb3c7ee61029aa0178cf65b272a4c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.545999760Z" level=info msg="CreateContainer within sandbox \"043df8eb6f8fb58b7bfeda2ebd8c0b1643bb3c7ee61029aa0178cf65b272a4c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6\""
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.546757523Z" level=info msg="StartContainer for \"4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6\""
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.675224779Z" level=info msg="StartContainer for \"4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6\" returns successfully"
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.758708055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-92tfl,Uid:1e2e52a8-4f89-49af-9741-f79384628a29,Namespace:kube-system,Attempt:0,} returns sandbox id \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\""
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.761622408Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.777685274Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494\""
	Jun 01 11:04:08 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:08.778859767Z" level=info msg="StartContainer for \"303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494\""
	Jun 01 11:04:09 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:04:09.158910291Z" level=info msg="StartContainer for \"303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494\" returns successfully"
	Jun 01 11:06:50 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:06:50.430470354Z" level=info msg="shim disconnected" id=303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494
	Jun 01 11:06:50 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:06:50.430539272Z" level=warning msg="cleaning up after shim disconnected" id=303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494 namespace=k8s.io
	Jun 01 11:06:50 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:06:50.430556971Z" level=info msg="cleaning up dead shim"
	Jun 01 11:06:50 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:06:50.439851127Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:06:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2068 runtime=io.containerd.runc.v2\n"
	Jun 01 11:06:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:06:51.047418788Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jun 01 11:06:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:06:51.061444732Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"44a64d6574af41b7959f71d1dab2a88484c78c34edb54a7a824ddd43a44b981e\""
	Jun 01 11:06:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:06:51.064511450Z" level=info msg="StartContainer for \"44a64d6574af41b7959f71d1dab2a88484c78c34edb54a7a824ddd43a44b981e\""
	Jun 01 11:06:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:06:51.257716620Z" level=info msg="StartContainer for \"44a64d6574af41b7959f71d1dab2a88484c78c34edb54a7a824ddd43a44b981e\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220601110327-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220601110327-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=embed-certs-20220601110327-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_03_56_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:03:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220601110327-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:08:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:04:07 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:04:07 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:04:07 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:04:07 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220601110327-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                d600b159-ea34-4ea3-ab62-e86c595f06ef
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220601110327-6708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-92tfl                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m1s
	  kube-system                 kube-apiserver-embed-certs-20220601110327-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-embed-certs-20220601110327-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-99lsz                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-embed-certs-20220601110327-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m                     kube-proxy  
	  Normal  NodeHasSufficientMemory  4m21s (x5 over 4m21s)  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x4 over 4m21s)  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x4 over 4m21s)  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2] <==
	* {"level":"info","ts":"2022-06-01T11:03:49.457Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-01T11:03:49.457Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-01T11:03:49.457Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:03:49.457Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220601110327-6708 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.086Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-06-01T11:03:50.086Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:06:59.331Z","caller":"traceutil/trace.go:171","msg":"trace[993403062] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"164.749443ms","start":"2022-06-01T11:06:59.166Z","end":"2022-06-01T11:06:59.331Z","steps":["trace[993403062] 'read index received'  (duration: 164.741295ms)","trace[993403062] 'applied index is now lower than readState.Index'  (duration: 7.261µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:06:59.332Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"166.049774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20220601110327-6708\" ","response":"range_response_count:1 size:4776"}
	{"level":"info","ts":"2022-06-01T11:06:59.332Z","caller":"traceutil/trace.go:171","msg":"trace[243859244] range","detail":"{range_begin:/registry/minions/embed-certs-20220601110327-6708; range_end:; response_count:1; response_revision:516; }","duration":"166.144768ms","start":"2022-06-01T11:06:59.166Z","end":"2022-06-01T11:06:59.332Z","steps":["trace[243859244] 'agreement among raft nodes before linearized reading'  (duration: 164.864212ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  11:08:09 up 50 min,  0 users,  load average: 1.19, 1.62, 1.82
	Linux embed-certs-20220601110327-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a] <==
	* I0601 11:03:52.353146       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:03:52.353180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:03:52.353221       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:03:52.353225       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:03:52.353496       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:03:52.354371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 11:03:53.223928       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:03:53.230007       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 11:03:53.232751       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:03:53.233028       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 11:03:53.233046       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:03:53.654795       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:03:53.685311       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:03:53.775744       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:03:53.783710       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0601 11:03:53.784644       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:03:53.788004       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:03:54.362824       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:03:55.411558       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:03:55.418495       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:03:55.427653       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:04:00.570330       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:04:08.019838       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:04:08.117524       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:04:08.961758       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0] <==
	* I0601 11:04:07.415430       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:04:07.415463       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0601 11:04:07.418512       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:04:07.461867       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:04:07.464497       1 shared_informer.go:247] Caches are synced for taint 
	I0601 11:04:07.464573       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0601 11:04:07.464636       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0601 11:04:07.464710       1 event.go:294] "Event occurred" object="embed-certs-20220601110327-6708" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220601110327-6708 event: Registered Node embed-certs-20220601110327-6708 in Controller"
	W0601 11:04:07.464641       1 node_lifecycle_controller.go:1012] Missing timestamp for Node embed-certs-20220601110327-6708. Assuming now as a timestamp.
	I0601 11:04:07.464736       1 shared_informer.go:247] Caches are synced for GC 
	I0601 11:04:07.464789       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 11:04:07.464794       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0601 11:04:07.465561       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:04:07.466207       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:04:07.466846       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 11:04:07.844776       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:04:07.860043       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:04:07.860076       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:04:08.021708       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:04:08.045076       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:04:08.122691       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-99lsz"
	I0601 11:04:08.125091       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-92tfl"
	I0601 11:04:08.220606       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-2ms6r"
	I0601 11:04:08.226533       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9dpfv"
	I0601 11:04:08.241748       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-2ms6r"
	
	* 
	* ==> kube-proxy [4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6] <==
	* I0601 11:04:08.785335       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0601 11:04:08.785408       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0601 11:04:08.785447       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:04:08.956676       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:04:08.957522       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:04:08.957544       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:04:08.957576       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:04:08.958014       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:04:08.958572       1 config.go:317] "Starting service config controller"
	I0601 11:04:08.958596       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:04:08.959266       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:04:08.959287       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:04:09.058849       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:04:09.059356       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f] <==
	* W0601 11:03:52.358283       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:03:52.358434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:03:52.358594       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:52.358832       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:52.358601       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:03:52.358891       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:03:52.358710       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:03:52.358916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:03:53.226960       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:03:53.227001       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:03:53.235048       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:03:53.235096       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:03:53.321811       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:03:53.321848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:03:53.385122       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:53.385163       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:53.405212       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:03:53.405259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:03:53.455747       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:53.455790       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:53.455746       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:03:53.455816       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:03:53.557775       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:03:53.557818       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:03:55.783702       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:03:36 UTC, end at Wed 2022-06-01 11:08:10 UTC. --
	Jun 01 11:06:10 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:10.811415    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:06:15 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:15.812835    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:06:20 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:20.813863    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:06:25 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:25.814541    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:06:30 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:30.815620    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:06:35 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:35.816390    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:06:40 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:40.818124    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:06:45 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:45.819380    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:06:50 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:50.820388    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:06:51 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:06:51.045440    1320 scope.go:110] "RemoveContainer" containerID="303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494"
	Jun 01 11:06:55 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:06:55.821556    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:00 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:00.822365    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:05 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:05.824133    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:10 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:10.825269    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:15 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:15.825911    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:20 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:20.827245    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:25 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:25.828301    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:30 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:30.829274    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:35 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:35.830197    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:40 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:40.831312    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:45 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:45.832540    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:50 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:50.833567    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:07:55 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:07:55.834648    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:08:00 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:08:00.835929    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:08:05 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:08:05.836917    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
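
Reading the dump above: kubelet logged "cni plugin not initialized" every five seconds from 11:04 through 11:08, the node's Ready condition stayed False with reason NetworkPluginNotReady, and the node therefore kept its node.kubernetes.io/not-ready:NoSchedule taint until the node-Ready wait timed out (exit GUEST_START). kindnet-cni did exit once at 11:06:50 and was restarted a second later, but the restart never cleared the condition. A minimal triage sketch for a live reproduction follows; the profile and context names are taken from the logs above, the commands themselves were not part of this run, and /etc/cni/net.d is only the conventional location (kubelet can be pointed at a different conf dir):

	# inspect whatever CNI config the runtime is (not) finding
	minikube -p embed-certs-20220601110327-6708 ssh -- ls -l /etc/cni/net.d
	# surface the exact NotReady message from the node condition
	kubectl --context embed-certs-20220601110327-6708 get nodes \
	  -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].message}'
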
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-9dpfv storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 describe pod coredns-64897985d-9dpfv storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220601110327-6708 describe pod coredns-64897985d-9dpfv storage-provisioner: exit status 1 (51.188307ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-9dpfv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220601110327-6708 describe pod coredns-64897985d-9dpfv storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (283.31s)
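
FirstStart thus failed on a readiness timeout rather than a component crash: etcd, kube-apiserver, kube-scheduler, kube-controller-manager and kube-proxy were all Running; only the CNI never initialized. A sketch of replaying the start by hand, per the hint in the error box (flags here are standard minikube options chosen for illustration; the test's exact invocation, including its --embed-certs setting, is not shown in this excerpt):

	out/minikube-linux-amd64 start -p embed-certs-repro --memory=2048 \
	  --wait=true --wait-timeout=5m --driver=docker --container-runtime=containerd
	# on failure, collect the log bundle the error box asks for
	out/minikube-linux-amd64 -p embed-certs-repro logs --file=logs.txt
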

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (484.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [dede7476-c269-4541-87d9-c9dfadc9bded] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
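
The Pending reason matches the pattern above: an untolerated taint on the only node, which on a NotReady node is normally node.kubernetes.io/not-ready:NoSchedule (exactly the taint in the embed-certs describe output earlier). A hypothetical spot check against the same context, not run as part of the test:

	kubectl --context old-k8s-version-20220601105850-6708 get nodes \
	  -o jsonpath='{.items[0].spec.taints}'
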
E0601 11:03:55.386051    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:04:05.395117    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:04:15.867093    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:04:16.575699    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:04:22.034382    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:22.039621    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:22.049860    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:22.070105    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:22.110348    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:22.191329    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:22.352039    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:22.672572    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:23.313526    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:24.594711    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:27.155564    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:31.087118    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 11:04:32.276706    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:42.517057    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:04:56.827668    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:05:02.997638    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:05:24.947903    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 11:05:38.496835    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:05:43.958706    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:06:18.748704    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:06:21.552376    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
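Note: the repeated cert_rotation errors above are noise rather than the cause of this failure: client-go's certificate-rotation watcher still references client.crt files for profiles (bridge, kindnet, cilium, enable-default-cni, functional, ingress-addon-legacy) that earlier tests had already deleted, so every refresh attempt fails with "no such file or directory". A minimal shell sketch for spotting such stale kubeconfig entries (standard kubectl flags; the loop is illustrative, not part of the test suite):

	# print every client-certificate path referenced by the kubeconfig and
	# flag the ones that no longer exist on disk (leftovers of deleted profiles)
	kubectl config view -o jsonpath='{range .users[*]}{.user.client-certificate}{"\n"}{end}' |
	while read -r cert; do
	  [ -n "$cert" ] && [ ! -f "$cert" ] && echo "stale client cert: $cert"
	done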

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: ***** TestStartStop/group/old-k8s-version/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:198: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
start_stop_delete_test.go:198: TestStartStop/group/old-k8s-version/serial/DeployApp: showing logs for failed pods as of 2022-06-01 11:11:47.620016636 +0000 UTC m=+3108.619636882
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 describe po busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context old-k8s-version-20220601105850-6708 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vdddm (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-vdddm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vdddm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  8m                     default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  5m24s (x1 over 6m54s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 logs busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context old-k8s-version-20220601105850-6708 logs busybox -n default:
start_stop_delete_test.go:198: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
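Note: the describe output explains the 8m0s timeout: the pod tolerates only the not-ready/unreachable NoExecute taints, while the cluster's single node carries a taint it does not tolerate, so busybox never leaves Pending. The offending taint is not shown in this transcript; on a control plane that never reached Ready it would typically be node.kubernetes.io/not-ready:NoSchedule. A diagnostic sketch using standard kubectl commands:

	# check node readiness, then list the taints that blocked scheduling
	kubectl --context old-k8s-version-20220601105850-6708 get nodes -o wide
	kubectl --context old-k8s-version-20220601105850-6708 describe nodes | grep -A3 -i taints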
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601105850-6708
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601105850-6708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0",
	        "Created": "2022-06-01T10:59:00.78565124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214562,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T10:59:01.206141646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hosts",
	        "LogPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0-json.log",
	        "Name": "/old-k8s-version-20220601105850-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601105850-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601105850-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b96100ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/docker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa924f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601105850-6708",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601105850-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601105850-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1256a9334e29c4a4e5495d8f827d7d7664f9ca7db2fab32facb03db36a3b3af6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49394"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1256a9334e29",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601105850-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3b070aceb311",
	                        "old-k8s-version-20220601105850-6708"
	                    ],
	                    "NetworkID": "99443bab5d3fa350d07dfff0b6c1624f2cd2601ac21b76ee77d57de53df02f62",
	                    "EndpointID": "f8f8bbe3bd358574febf4fc32d4b04efab03dd462466478278f465336715a20f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
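Note: when only a few fields of the inspect dump matter, docker's --format flag (a Go template) extracts them directly. A sketch pulling the values this post-mortem relies on, with the outputs expected from the dump above:

	docker inspect -f '{{.State.Status}}' old-k8s-version-20220601105850-6708                 # running
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-20220601105850-6708").IPAddress}}' \
	  old-k8s-version-20220601105850-6708                                                    # 192.168.58.2
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
	  old-k8s-version-20220601105850-6708                                                    # 49394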
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220601105850-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | enable-default-cni-20220601104837-6708         | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	|         | enable-default-cni-20220601104837-6708            |                                                |         |                |                     |                     |
	| start   | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:58 UTC |
	|         | --memory=2048                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	| ssh     | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	| start   | -p calico-20220601104839-6708                     | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |                |                     |                     |
	|         | --cni=calico --driver=docker                      |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	| ssh     | -p calico-20220601104839-6708                     | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |                |                     |                     |
	| start   | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |                |                     |                     |
	|         | --cni=cilium --driver=docker                      |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	| ssh     | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |                |                     |                     |
	| delete  | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	| start   | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | auto-20220601104837-6708 logs                     | auto-20220601104837-6708                       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | -n 25                                             |                                                |         |                |                     |                     |
	| delete  | -p auto-20220601104837-6708                       | auto-20220601104837-6708                       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	| logs    | old-k8s-version-20220601105850-6708               | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220601110654-6708      | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | disable-driver-mounts-20220601110654-6708         |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708    | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:06:54
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:06:54.667302  244383 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:06:54.667430  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667448  244383 out.go:309] Setting ErrFile to fd 2...
	I0601 11:06:54.667455  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667611  244383 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:06:54.668037  244383 out.go:303] Setting JSON to false
	I0601 11:06:54.669846  244383 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2969,"bootTime":1654078646,"procs":645,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:06:54.669914  244383 start.go:125] virtualization: kvm guest
	I0601 11:06:54.672039  244383 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:06:54.673519  244383 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:06:54.673532  244383 notify.go:193] Checking for updates...
	I0601 11:06:54.676498  244383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:06:54.678066  244383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:06:54.679578  244383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:06:54.681049  244383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:06:54.682891  244383 config.go:178] Loaded profile config "calico-20220601104839-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683008  244383 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683105  244383 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:06:54.683158  244383 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:06:54.724298  244383 docker.go:137] docker version: linux-20.10.16
	I0601 11:06:54.724374  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.826819  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.7540349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.826932  244383 docker.go:254] overlay module found
	I0601 11:06:54.829003  244383 out.go:177] * Using the docker driver based on user configuration
	I0601 11:06:54.830315  244383 start.go:284] selected driver: docker
	I0601 11:06:54.830327  244383 start.go:806] validating driver "docker" against <nil>
	I0601 11:06:54.830352  244383 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:06:54.831265  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.931062  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.859997014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.931188  244383 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:06:54.931414  244383 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:06:54.933788  244383 out.go:177] * Using Docker driver with the root privilege
	I0601 11:06:54.935205  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:06:54.935218  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:06:54.935233  244383 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935238  244383 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935243  244383 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 11:06:54.935250  244383 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:06:54.936846  244383 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:06:54.938038  244383 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:06:54.939519  244383 out.go:177] * Pulling base image ...
	I0601 11:06:54.940856  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:54.940881  244383 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:06:54.940905  244383 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:06:54.940928  244383 cache.go:57] Caching tarball of preloaded images
	I0601 11:06:54.941154  244383 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:06:54.941186  244383 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:06:54.941308  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:06:54.941333  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json: {Name:mk8b3d87cba3844f82b835b906c4fc7fcf103163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:06:54.986323  244383 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:06:54.986351  244383 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:06:54.986370  244383 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:06:54.986406  244383 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:06:54.986553  244383 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 123.17µs
	I0601 11:06:54.986588  244383 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:06:54.986696  244383 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:06:54.668423  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:57.168205  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:54.989283  244383 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:06:54.989495  244383 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:06:54.989523  244383 client.go:168] LocalClient.Create starting
	I0601 11:06:54.989576  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:06:54.989602  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989620  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.989670  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:06:54.989686  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989697  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.990003  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:06:55.021531  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:06:55.021592  244383 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601110654-6708] to gather additional debugging logs...
	I0601 11:06:55.021618  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708
	W0601 11:06:55.051948  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 returned with exit code 1
	I0601 11:06:55.051984  244383 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601110654-6708]: docker network inspect default-k8s-different-port-20220601110654-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.052003  244383 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601110654-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601110654-6708
	
	** /stderr **
	I0601 11:06:55.052049  244383 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:06:55.083654  244383 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001322e0] misses:0}
	I0601 11:06:55.083702  244383 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:06:55.083718  244383 network_create.go:115] attempt to create docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:06:55.083760  244383 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.150185  244383 network_create.go:99] docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 created
	I0601 11:06:55.150232  244383 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20220601110654-6708" container
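	For reference, the network-create step above can be reproduced by hand. A minimal sketch with a placeholder network name (demo-net is not part of this test run):
	NET=demo-net
	# Same bridge/subnet/gateway shape as the log, MTU pinned to 1500.
	docker network create --driver=bridge --subnet=192.168.49.0/24 \
	  --gateway=192.168.49.1 -o com.docker.network.driver.mtu=1500 "$NET"
	# Confirm the subnet the way minikube's inspect template does.
	docker network inspect "$NET" --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'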
	I0601 11:06:55.150301  244383 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:06:55.185029  244383 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220601110654-6708 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:06:55.218896  244383 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.218982  244383 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220601110654-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --entrypoint /usr/bin/test -v default-k8s-different-port-20220601110654-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:06:55.773802  244383 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.773849  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:55.773871  244383 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:06:55.773932  244383 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
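	The docker run above is minikube's volume-priming trick: a throwaway container mounts the named volume and untars the preload into it. A sketch of the same pattern with placeholder names and a gzip tarball (the run above uses lz4, which the kic base image ships):
	VOLUME=demo-vol
	TARBALL="$PWD/preload.tar.gz"   # placeholder, not the test's tarball
	docker volume create "$VOLUME"
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$TARBALL":/preloaded.tar:ro -v "$VOLUME":/extractDir \
	  ubuntu:20.04 -xzf /preloaded.tar -C /extractDir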
	I0601 11:06:59.334049  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:01.667968  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:03.152484  244383 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (7.378487132s)
	I0601 11:07:03.152523  244383 kic.go:188] duration metric: took 7.378645 seconds to extract preloaded images to volume
	W0601 11:07:03.152655  244383 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 11:07:03.152754  244383 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:07:03.258344  244383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220601110654-6708 --name default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --network default-k8s-different-port-20220601110654-6708 --ip 192.168.49.2 --volume default-k8s-different-port-20220601110654-6708:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:07:03.640637  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Running}}
	I0601 11:07:03.675247  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.707758  244383 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220601110654-6708 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:07:03.767985  244383 oci.go:247] the created container "default-k8s-different-port-20220601110654-6708" has a running status.
	I0601 11:07:03.768013  244383 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa...
	I0601 11:07:03.823786  244383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:07:03.917787  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.956706  244383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:07:03.956735  244383 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220601110654-6708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:07:04.044516  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:04.081442  244383 machine.go:88] provisioning docker machine ...
	I0601 11:07:04.081477  244383 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:04.081535  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.119200  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.119405  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.119425  244383 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:07:04.249668  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:07:04.249734  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.283443  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.283593  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.283628  244383 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:07:04.395587  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:07:04.395617  244383 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:07:04.395643  244383 ubuntu.go:177] setting up certificates
	I0601 11:07:04.395652  244383 provision.go:83] configureAuth start
	I0601 11:07:04.395697  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.427413  244383 provision.go:138] copyHostCerts
	I0601 11:07:04.427469  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:07:04.427481  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:07:04.427543  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:07:04.427622  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:07:04.427632  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:07:04.427659  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:07:04.427708  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:07:04.427721  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:07:04.427753  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:07:04.427802  244383 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
	I0601 11:07:04.535631  244383 provision.go:172] copyRemoteCerts
	I0601 11:07:04.535685  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:07:04.535726  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.568780  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.659152  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:07:04.676610  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:07:04.694731  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:07:04.711549  244383 provision.go:86] duration metric: configureAuth took 315.887909ms
	I0601 11:07:04.711573  244383 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:07:04.711735  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:04.711748  244383 machine.go:91] provisioned docker machine in 630.288068ms
	I0601 11:07:04.711754  244383 client.go:171] LocalClient.Create took 9.722222745s
	I0601 11:07:04.711778  244383 start.go:173] duration metric: libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" took 9.722275215s
	I0601 11:07:04.711793  244383 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:07:04.711800  244383 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:07:04.711844  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:07:04.711903  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.745536  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.831037  244383 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:07:04.833655  244383 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:07:04.833679  244383 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:07:04.833703  244383 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:07:04.833716  244383 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:07:04.833726  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:07:04.833775  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:07:04.833870  244383 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:07:04.833975  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:07:04.840420  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:04.857187  244383 start.go:309] post-start completed in 145.384397ms
	I0601 11:07:04.857493  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.888747  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:07:04.888963  244383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:07:04.889000  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.919352  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.000243  244383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:07:05.004050  244383 start.go:134] duration metric: createHost completed in 10.017341223s
	I0601 11:07:05.004075  244383 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 10.017502791s
	I0601 11:07:05.004171  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035905  244383 ssh_runner.go:195] Run: systemctl --version
	I0601 11:07:05.035960  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035972  244383 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:07:05.036031  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.069327  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.070632  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.175990  244383 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:07:05.186279  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:07:05.194913  244383 docker.go:187] disabling docker service ...
	I0601 11:07:05.194953  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:07:05.211132  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:07:05.219763  244383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:07:05.302855  244383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:07:05.379942  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:07:05.388684  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:07:05.401125  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.408798  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.416626  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.424218  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.431786  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:07:05.439234  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
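	The printf|base64 pipeline above writes the containerd drop-in as an opaque literal; decoding it shows the entire payload is just a config-version pin:
	echo 'dmVyc2lvbiA9IDIK' | base64 -d
	# prints: version = 2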
	I0601 11:07:05.451481  244383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:07:05.457796  244383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:07:05.464201  244383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:07:05.540478  244383 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:07:05.650499  244383 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:07:05.650567  244383 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:07:05.654052  244383 start.go:468] Will wait 60s for crictl version
	I0601 11:07:05.654103  244383 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:07:05.681128  244383 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:07:05.681188  244383 ssh_runner.go:195] Run: containerd --version
	I0601 11:07:05.710828  244383 ssh_runner.go:195] Run: containerd --version
	I0601 11:07:05.741779  244383 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:07:05.743207  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:07:05.773719  244383 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:07:05.777293  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
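	This grep/printf/cp dance is minikube's idempotent /etc/hosts update: drop any stale entry for the name, append a fresh one, copy the temp file back. The same pattern spelled out with shell variables (values here are placeholders mirroring the log):
	NAME=host.minikube.internal
	IP=192.168.49.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts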
	I0601 11:07:05.788623  244383 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:07:05.790049  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:07:05.790117  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.812809  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.812831  244383 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:07:05.812869  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.834860  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.834879  244383 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:07:05.834947  244383 ssh_runner.go:195] Run: sudo crictl info
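	crictl info dumps the CRI runtime's effective config as JSON, which is a quick way to verify that the conf_dir override written into /etc/containerd/config.toml earlier actually took effect (a sketch; the camelCase field name follows containerd's CRI plugin JSON tags and is an assumption here):
	sudo crictl info | grep -i confDir
	# expect something like: "confDir": "/etc/cni/net.mk"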
	I0601 11:07:05.857173  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:05.857192  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:05.857218  244383 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:07:05.857235  244383 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:07:05.857383  244383 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:07:05.857471  244383 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 11:07:05.857530  244383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:07:05.864412  244383 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:07:05.864485  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:07:05.870921  244383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:07:05.883133  244383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:07:05.896240  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
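	Once staged, a rendered config like the one above can be sanity-checked without mutating the node via kubeadm's dry-run mode (a sketch; the path matches where this log stages the file, and the node's kubeadm binary would normally live under /var/lib/minikube/binaries):
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run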
	I0601 11:07:05.908996  244383 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:07:05.911816  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:07:05.920740  244383 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:07:05.920863  244383 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:07:05.920906  244383 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:07:05.920964  244383 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:07:05.920984  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt with IP's: []
	I0601 11:07:06.190511  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt ...
	I0601 11:07:06.190541  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt: {Name:mk1f0de9f338c1565864d345295f211cd6b42042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190751  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key ...
	I0601 11:07:06.190766  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key: {Name:mk3abd1ec1bc2a3303283efb1d56bffeb558d491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190855  244383 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:07:06.190870  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:07:06.411949  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 ...
	I0601 11:07:06.411982  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2: {Name:mk21c89d2fdd1fdc207dd136def37f5d90a62bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412202  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 ...
	I0601 11:07:06.412221  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2: {Name:mk2f4aae6eb49e6251c3e6c8e6f0f6462f382896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412314  244383 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt
	I0601 11:07:06.412369  244383 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key
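	To double-check which SANs ended up in the apiserver certificate generated above, openssl can print them directly (a sketch; PROFILE_DIR stands in for the long .minikube/profiles/... path in this log, shown here with a default home-directory layout):
	PROFILE_DIR=$HOME/.minikube/profiles/default-k8s-different-port-20220601110654-6708
	openssl x509 -noout -text -in "$PROFILE_DIR/apiserver.crt" | grep -A1 'Subject Alternative Name'
	# expect the IPs logged above: 192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1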
	I0601 11:07:06.412451  244383 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:07:06.412469  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt with IP's: []
	I0601 11:07:06.545552  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt ...
	I0601 11:07:06.545619  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt: {Name:mkee564e3149cd8be755ca3cbe99f47feac8e4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.545807  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key ...
	I0601 11:07:06.545819  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key: {Name:mk3354416a46b334b24512eafd987800637af3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.547104  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:07:06.547148  244383 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:07:06.547174  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:07:06.547194  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:07:06.547234  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:07:06.547271  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:07:06.547327  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:06.547961  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:07:06.565921  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:07:06.584089  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:07:06.601191  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:07:06.618465  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:07:06.635815  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:07:06.653212  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:07:06.670886  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:07:06.687801  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:07:06.704953  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:07:06.721444  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:07:06.737875  244383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:07:06.751738  244383 ssh_runner.go:195] Run: openssl version
	I0601 11:07:06.756719  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:07:06.764146  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767163  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767216  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.771914  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:07:06.778934  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:07:06.786568  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789545  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789607  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.794248  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:07:06.801364  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:07:06.808247  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811196  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811252  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.816241  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
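	The ls/openssl/ln sequence above implements OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink. The same convention by hand (CERT is a placeholder; in this log minikubeCA hashes to b5213941):
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"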
	I0601 11:07:06.823684  244383 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:07:06.823768  244383 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:07:06.823809  244383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:07:06.847418  244383 cri.go:87] found id: ""
	I0601 11:07:06.847481  244383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:07:06.854612  244383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:07:06.861596  244383 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:07:06.861652  244383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:07:06.868516  244383 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:07:06.868568  244383 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:07:03.668636  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:06.167338  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:07.121183  244383 out.go:204]   - Generating certificates and keys ...
	I0601 11:07:09.218861  244383 out.go:204]   - Booting up control plane ...
	I0601 11:07:08.167714  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:10.168162  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:12.667278  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:14.668246  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:17.168197  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.259795  244383 out.go:204]   - Configuring RBAC rules ...
	I0601 11:07:21.672636  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:21.672654  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:21.674533  244383 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:07:19.668390  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.668490  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.675845  244383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:07:21.679515  244383 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:07:21.679534  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:07:21.692464  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:07:22.465311  244383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:07:22.465382  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.465395  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_07_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.521244  244383 ops.go:34] apiserver oom_adj: -16
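	The burst of identical "get sa default" runs that follows is a readiness poll: kubeadm returns before the default ServiceAccount exists in the namespace, so minikube retries on a short interval. An equivalent shell sketch of the same loop:
	until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done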
	I0601 11:07:22.521263  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.109047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.609743  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.109036  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.609779  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.167646  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:26.168090  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:25.109823  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:25.609061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.108863  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.608780  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.109061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.609116  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.109699  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.609047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.109170  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.608851  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.667871  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:30.668198  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:30.109055  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:30.608852  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.109521  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.609057  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.108853  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.609531  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.108838  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.608822  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.108973  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.609839  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.671502  244383 kubeadm.go:1045] duration metric: took 12.206180961s to wait for elevateKubeSystemPrivileges.
	I0601 11:07:34.671537  244383 kubeadm.go:397] StartCluster complete in 27.847858486s
	I0601 11:07:34.671557  244383 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:34.671645  244383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:07:34.673551  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:35.189278  244383 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:07:35.189337  244383 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:07:35.191451  244383 out.go:177] * Verifying Kubernetes components...
	I0601 11:07:35.189391  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:07:35.189390  244383 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 11:07:35.189576  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:35.192926  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:07:35.192990  244383 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193023  244383 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193071  244383 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193025  244383 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.193134  244383 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:07:35.193178  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.193498  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.193681  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.209430  244383 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:07:35.237918  244383 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:07:35.239410  244383 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.239425  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:07:35.239470  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.255735  244383 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.255765  244383 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:07:35.255799  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.256352  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.277557  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.290858  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:07:35.296059  244383 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.296086  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:07:35.296137  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.338006  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.376722  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.468185  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.653594  244383 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0601 11:07:35.783515  244383 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 11:07:33.167882  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:35.168161  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:37.168296  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:35.784841  244383 addons.go:417] enableAddons completed in 595.455746ms
	I0601 11:07:37.216016  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:39.667840  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:42.167654  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:39.717025  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:42.216640  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:44.667876  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:47.168006  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:44.716894  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:47.216117  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:49.217067  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:49.168183  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:51.667932  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:51.716491  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:54.216277  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:54.167913  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:56.167953  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:56.216761  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:58.717105  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:58.168275  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:00.668037  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:01.216388  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:03.716389  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:03.167969  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:05.667837  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.168013  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.670567  232046 node_ready.go:38] duration metric: took 4m0.010022239s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:08:08.673338  232046 out.go:177] 
	W0601 11:08:08.675576  232046 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:08:08.675599  232046 out.go:239] * 
	W0601 11:08:08.676630  232046 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:08:08.678476  232046 out.go:177] 
	I0601 11:08:05.717011  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:08.215942  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:10.216368  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:12.216490  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:14.716947  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:17.216379  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:19.216687  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:21.216835  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:23.717175  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:26.216167  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:28.216729  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:30.216872  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:32.716452  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:35.216938  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:37.716649  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:39.716753  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:42.215917  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:44.216056  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:46.216458  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:48.216662  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:50.716633  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:52.716937  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:55.216648  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:57.716740  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:00.217259  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:02.716121  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:04.716668  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:06.716874  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:08.717065  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:11.216427  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:13.716769  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:16.216572  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:18.715438  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:20.716744  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:23.216674  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:25.716243  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:27.716345  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:29.716770  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:32.217046  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:34.716539  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:36.716922  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:38.717062  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:40.717196  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:43.216722  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:45.716601  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:47.716677  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:49.718424  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:52.216702  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:54.716437  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:57.216473  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:59.216703  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:01.716563  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:04.216144  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:06.216284  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:08.716579  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:11.216102  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:13.216282  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:15.716437  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:18.216335  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:20.715993  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:22.716802  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:25.216481  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:27.216823  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:29.716428  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:31.716531  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:34.216325  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:36.216576  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:38.216755  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:40.716532  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:43.216563  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:45.716341  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:47.716680  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:50.216218  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:52.716283  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:54.716952  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:57.216293  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:59.216999  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:01.716144  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:03.716378  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:05.716604  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:08.216289  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:10.216683  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:12.716931  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:15.216225  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:17.216376  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:19.716558  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:22.216186  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:24.216522  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:26.717180  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:29.216092  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:31.216231  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:33.716223  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:35.218171  244383 node_ready.go:38] duration metric: took 4m0.008704673s waiting for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:11:35.220452  244383 out.go:177] 
	W0601 11:11:35.221885  244383 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:11:35.221913  244383 out.go:239] * 
	W0601 11:11:35.222650  244383 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:11:35.224616  244383 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bc21545827165       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   0a6a4ae8178de
	01651d3598805       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   f1d9aedf42d24
	0b9cf8973c884       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   ac769aefe340a
	f18885873e44e       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   3736e1d98ec61
	92f272874915c       b2756210eeabf       12 minutes ago      Running             etcd                      0                   41c0131fc288d
	e4d08ecd5adee       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   0a47511bd2aec
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 10:59:01 UTC, end at Wed 2022-06-01 11:11:48 UTC. --
	Jun 01 11:05:06 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:06.288903624Z" level=warning msg="cleaning up after shim disconnected" id=df2a3875ec723f9785edea449611ef162e14636786905cd989570b375ffed8b4 namespace=k8s.io
	Jun 01 11:05:06 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:06.288915658Z" level=info msg="cleaning up dead shim"
	Jun 01 11:05:06 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:06.298269936Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:05:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3102 runtime=io.containerd.runc.v2\n"
	Jun 01 11:05:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:07.057612928Z" level=info msg="RemoveContainer for \"8f9cec9f497f70922114b5778ad83667fefb19394aa9a2008cd70a55ebd910b6\""
	Jun 01 11:05:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:07.062471773Z" level=info msg="RemoveContainer for \"8f9cec9f497f70922114b5778ad83667fefb19394aa9a2008cd70a55ebd910b6\" returns successfully"
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.529320245Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.542361255Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\""
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.542763739Z" level=info msg="StartContainer for \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\""
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.566563238Z" level=info msg="RemoveContainer for \"df2a3875ec723f9785edea449611ef162e14636786905cd989570b375ffed8b4\""
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.570426793Z" level=info msg="RemoveContainer for \"df2a3875ec723f9785edea449611ef162e14636786905cd989570b375ffed8b4\" returns successfully"
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.678510329Z" level=info msg="StartContainer for \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\" returns successfully"
	Jun 01 11:07:58 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:07:58.981613200Z" level=info msg="shim disconnected" id=d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a
	Jun 01 11:07:58 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:07:58.981683206Z" level=warning msg="cleaning up after shim disconnected" id=d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a namespace=k8s.io
	Jun 01 11:07:58 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:07:58.981699329Z" level=info msg="cleaning up dead shim"
	Jun 01 11:07:58 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:07:58.992168635Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:07:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3538 runtime=io.containerd.runc.v2\n"
	Jun 01 11:08:27 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:08:27.529080919Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jun 01 11:08:27 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:08:27.540778103Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443\""
	Jun 01 11:08:27 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:08:27.541190322Z" level=info msg="StartContainer for \"bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443\""
	Jun 01 11:08:27 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:08:27.662166581Z" level=info msg="StartContainer for \"bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443\" returns successfully"
	Jun 01 11:11:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:07.892985964Z" level=info msg="shim disconnected" id=bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443
	Jun 01 11:11:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:07.893042710Z" level=warning msg="cleaning up after shim disconnected" id=bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443 namespace=k8s.io
	Jun 01 11:11:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:07.893063061Z" level=info msg="cleaning up dead shim"
	Jun 01 11:11:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:07.902635003Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:11:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3996 runtime=io.containerd.runc.v2\n"
	Jun 01 11:11:08 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:08.565765811Z" level=info msg="RemoveContainer for \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\""
	Jun 01 11:11:08 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:08.570873203Z" level=info msg="RemoveContainer for \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220601105850-6708
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220601105850-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=old-k8s-version-20220601105850-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T10_59_29_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 10:59:23 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:11:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:11:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:11:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:11:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    old-k8s-version-20220601105850-6708
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	 System UUID:                cf752223-716a-46c7-b06a-74cba9af00dc
	 Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	 Kernel Version:             5.13.0-1027-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.4
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220601105850-6708                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kindnet-rvdm8                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                kube-apiserver-old-k8s-version-20220601105850-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-20220601105850-6708   200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-9db28                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-20220601105850-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From                                             Message
	  ----    ------                   ----               ----                                             -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-20220601105850-6708  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [92f272874915c4877257c68e1d43539f7183cbef97f4b0837113afe72f1cdb3c] <==
	* 2022-06-01 10:59:19.557971 W | auth: simple token is not cryptographically signed
	2022-06-01 10:59:19.561258 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-06-01 10:59:19.561609 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-06-01 10:59:19.561830 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2022-06-01 10:59:19.563596 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-06-01 10:59:19.563780 I | embed: listening for metrics on http://192.168.58.2:2381
	2022-06-01 10:59:19.563857 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-06-01 10:59:20.398057 I | raft: b2c6679ac05f2cf1 is starting a new election at term 1
	2022-06-01 10:59:20.398087 I | raft: b2c6679ac05f2cf1 became candidate at term 2
	2022-06-01 10:59:20.398113 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	2022-06-01 10:59:20.398122 I | raft: b2c6679ac05f2cf1 became leader at term 2
	2022-06-01 10:59:20.398127 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2022-06-01 10:59:20.398431 I | etcdserver: published {Name:old-k8s-version-20220601105850-6708 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2022-06-01 10:59:20.398459 I | embed: ready to serve client requests
	2022-06-01 10:59:20.398511 I | embed: ready to serve client requests
	2022-06-01 10:59:20.398527 I | etcdserver: setting up the initial cluster version to 3.3
	2022-06-01 10:59:20.399286 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-06-01 10:59:20.399361 I | etcdserver/api: enabled capabilities for version 3.3
	2022-06-01 10:59:20.400666 I | embed: serving client requests on 192.168.58.2:2379
	2022-06-01 10:59:20.401288 I | embed: serving client requests on 127.0.0.1:2379
	2022-06-01 11:00:27.079535 W | etcdserver: request "header:<ID:3238511576856218971 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:394 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238511576856218969 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>" with result "size:16" took too long (105.707876ms) to execute
	2022-06-01 11:00:27.370158 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (109.381517ms) to execute
	2022-06-01 11:07:02.455767 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:799" took too long (253.04121ms) to execute
	2022-06-01 11:09:20.420123 I | mvcc: store.index: compact 468
	2022-06-01 11:09:20.420844 I | mvcc: finished scheduled compaction at 468 (took 384.008µs)
	
	* 
	* ==> kernel <==
	*  11:11:48 up 54 min,  0 users,  load average: 0.50, 1.15, 1.59
	Linux old-k8s-version-20220601105850-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0b9cf8973c8844f5d3f241696625e5764fbd79a0c0fa64202fca8a67567e726a] <==
	* I0601 10:59:23.500638       1 establishing_controller.go:73] Starting EstablishingController
	I0601 10:59:23.500713       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0601 10:59:23.500739       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0601 10:59:23.502269       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.58.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0601 10:59:23.600240       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 10:59:23.600365       1 cache.go:39] Caches are synced for autoregister controller
	I0601 10:59:23.600658       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 10:59:23.653039       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0601 10:59:24.500177       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0601 10:59:24.500198       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 10:59:24.500206       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 10:59:24.504915       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0601 10:59:24.507571       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0601 10:59:24.507599       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0601 10:59:25.260704       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 10:59:26.281264       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 10:59:26.561277       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0601 10:59:26.876565       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 10:59:26.877208       1 controller.go:606] quota admission added evaluator for: endpoints
	I0601 10:59:27.764458       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0601 10:59:28.362361       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0601 10:59:28.727470       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0601 10:59:44.218023       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0601 10:59:44.232173       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0601 10:59:44.620734       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [f18885873e44ef000cea8b73305d4b972b24f41b3a821ebf6ed2fbb3c400745d] <==
	* W0601 10:59:44.510205       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="old-k8s-version-20220601105850-6708" does not exist
	I0601 10:59:44.513080       1 shared_informer.go:204] Caches are synced for attach detach 
	I0601 10:59:44.515251       1 shared_informer.go:204] Caches are synced for taint 
	I0601 10:59:44.515323       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0601 10:59:44.515326       1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: 
	W0601 10:59:44.515439       1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-20220601105850-6708. Assuming now as a timestamp.
	I0601 10:59:44.515423       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20220601105850-6708", UID:"9a70fc40-abc0-4b88-bdf7-4c4dea7658d1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20220601105850-6708 event: Registered Node old-k8s-version-20220601105850-6708 in Controller
	I0601 10:59:44.515473       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 10:59:44.562663       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0601 10:59:44.562672       1 shared_informer.go:204] Caches are synced for stateful set 
	I0601 10:59:44.566561       1 shared_informer.go:204] Caches are synced for node 
	I0601 10:59:44.566584       1 range_allocator.go:172] Starting range CIDR allocator
	I0601 10:59:44.566598       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
	I0601 10:59:44.578102       1 shared_informer.go:204] Caches are synced for TTL 
	I0601 10:59:44.616316       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0601 10:59:44.631230       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"46c63a7a-da9c-4b21-b27e-3ab2cc1bf42c", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-9db28
	I0601 10:59:44.633700       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"aee4ae9e-2298-4d10-81af-933537f4ccd9", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rvdm8
	I0601 10:59:44.666231       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0601 10:59:44.666319       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 10:59:44.667908       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0601 10:59:44.674142       1 range_allocator.go:359] Set node old-k8s-version-20220601105850-6708 PodCIDR to [10.244.0.0/24]
	I0601 10:59:44.709226       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0601 10:59:44.718073       1 shared_informer.go:204] Caches are synced for resource quota 
	I0601 10:59:45.806836       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I0601 10:59:45.907072       1 shared_informer.go:204] Caches are synced for resource quota 
	
	* 
	* ==> kube-proxy [01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3] <==
	* W0601 10:59:45.684675       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0601 10:59:45.696662       1 node.go:135] Successfully retrieved node IP: 192.168.58.2
	I0601 10:59:45.696711       1 server_others.go:149] Using iptables Proxier.
	I0601 10:59:45.697092       1 server.go:529] Version: v1.16.0
	I0601 10:59:45.698531       1 config.go:313] Starting service config controller
	I0601 10:59:45.698559       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0601 10:59:45.698582       1 config.go:131] Starting endpoints config controller
	I0601 10:59:45.698600       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0601 10:59:45.798783       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0601 10:59:45.799058       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e4d08ecd5adee34f6ccfaeb042d497cedc44597ee436ef3a30c0c98e725c3582] <==
	* I0601 10:59:23.568522       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0601 10:59:23.569198       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0601 10:59:23.658434       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 10:59:23.660485       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 10:59:23.661016       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 10:59:23.662119       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:23.665509       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:23.665685       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 10:59:23.665696       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 10:59:23.665786       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 10:59:23.665877       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 10:59:23.666262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 10:59:23.667640       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 10:59:24.659538       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 10:59:24.661616       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 10:59:24.662868       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 10:59:24.664538       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:24.666434       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:24.667461       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 10:59:24.668599       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 10:59:24.669697       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 10:59:24.670863       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 10:59:24.672730       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 10:59:24.673763       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 10:59:45.971438       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 10:59:01 UTC, end at Wed 2022-06-01 11:11:49 UTC. --
	Jun 01 11:10:08 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:08.784763     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:13 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:13.785496     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:18 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:18.786232     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:23 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:23.786988     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:28 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:28.787783     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:33 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:33.788512     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:38 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:38.789326     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:43 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:43.790070     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:48 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:48.790925     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:53 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:53.791797     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:58 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:58.792624     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:03 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:03.793544     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:08 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:08.565904     914 pod_workers.go:191] Error syncing pod 0648d955-2d20-449d-88b9-57fb087825d8 ("kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"
	Jun 01 11:11:08 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:08.794295     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:13 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:13.795086     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:18 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:18.795860     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:20 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:20.527035     914 pod_workers.go:191] Error syncing pod 0648d955-2d20-449d-88b9-57fb087825d8 ("kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"
	Jun 01 11:11:23 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:23.796712     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:28 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:28.797454     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:33 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:33.526874     914 pod_workers.go:191] Error syncing pod 0648d955-2d20-449d-88b9-57fb087825d8 ("kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"
	Jun 01 11:11:33 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:33.798326     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:38 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:38.799114     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:43 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:43.799970     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:44 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:44.526997     914 pod_workers.go:191] Error syncing pod 0648d955-2d20-449d-88b9-57fb087825d8 ("kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"
	Jun 01 11:11:48 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:48.800849     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

-- /stdout --
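
The kubelet log above is the actual failure chain: the kindnet-cni container is in CrashLoopBackOff, so the CNI never initializes, the node never reports Ready, and every pod stays unschedulable. A minimal diagnostic sketch, assuming kubectl access to the same context; the pod and container names are copied from the kubelet log above and will differ on a fresh run:

	# Pull the crashed container's output to see why kindnet keeps restarting.
	kubectl --context old-k8s-version-20220601105850-6708 -n kube-system \
	  logs kindnet-rvdm8 -c kindnet-cni --previous

	# Check whether the CNI conf dir kubelet was pointed at ever got populated.
	out/minikube-linux-amd64 ssh -p old-k8s-version-20220601105850-6708 -- ls -la /etc/cni/net.mk
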
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-5644d7b6d9-5z28m storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 describe pod busybox coredns-5644d7b6d9-5z28m storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601105850-6708 describe pod busybox coredns-5644d7b6d9-5z28m storage-provisioner: exit status 1 (57.586155ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vdddm (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-vdddm:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-vdddm
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  8m2s                   default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  5m26s (x1 over 6m56s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-5z28m" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220601105850-6708 describe pod busybox coredns-5644d7b6d9-5z28m storage-provisioner: exit status 1
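
The exit status 1 above is expected at this point: coredns-5644d7b6d9-5z28m and storage-provisioner no longer exist by the time describe runs, so only busybox is reported, Pending with FailedScheduling on an untolerated taint. Given the CNI failure above, that taint is almost certainly node.kubernetes.io/not-ready:NoSchedule (the pod's default tolerations only cover the NoExecute variants). A quick way to confirm, assuming the single-node default where the node name matches the profile name:

	# Expect a node.kubernetes.io/not-ready:NoSchedule taint while the CNI is down.
	kubectl --context old-k8s-version-20220601105850-6708 \
	  describe node old-k8s-version-20220601105850-6708 | grep -iA2 taints
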
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601105850-6708
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601105850-6708:

-- stdout --
	[
	    {
	        "Id": "3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0",
	        "Created": "2022-06-01T10:59:00.78565124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214562,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T10:59:01.206141646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hosts",
	        "LogPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0-json.log",
	        "Name": "/old-k8s-version-20220601105850-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601105850-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601105850-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601105850-6708",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601105850-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601105850-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1256a9334e29c4a4e5495d8f827d7d7664f9ca7db2fab32facb03db36a3b3af6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49394"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1256a9334e29",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601105850-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3b070aceb311",
	                        "old-k8s-version-20220601105850-6708"
	                    ],
	                    "NetworkID": "99443bab5d3fa350d07dfff0b6c1624f2cd2601ac21b76ee77d57de53df02f62",
	                    "EndpointID": "f8f8bbe3bd358574febf4fc32d4b04efab03dd462466478278f465336715a20f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
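
Note how HostConfig.PortBindings requests ephemeral host ports (empty HostPort) while NetworkSettings.Ports shows what Docker actually assigned, e.g. 8443/tcp (the apiserver) mapped to 127.0.0.1:49394. A one-liner to resolve such a mapping without parsing the inspect JSON:

	# Resolve the host-side endpoint of the container's apiserver port.
	docker port old-k8s-version-20220601105850-6708 8443
	# -> 127.0.0.1:49394, matching the NetworkSettings.Ports block above
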
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220601105850-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:58 UTC |
	|         | --memory=2048                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	| ssh     | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	| start   | -p calico-20220601104839-6708                     | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |                |                     |                     |
	|         | --cni=calico --driver=docker                      |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	| ssh     | -p calico-20220601104839-6708                     | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |                |                     |                     |
	| start   | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |                |                     |                     |
	|         | --cni=cilium --driver=docker                      |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	| ssh     | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |                |                     |                     |
	| delete  | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	| start   | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | auto-20220601104837-6708 logs                     | auto-20220601104837-6708                       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | -n 25                                             |                                                |         |                |                     |                     |
	| delete  | -p auto-20220601104837-6708                       | auto-20220601104837-6708                       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	| logs    | old-k8s-version-20220601105850-6708               | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220601110654-6708      | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | disable-driver-mounts-20220601110654-6708         |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708    | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708               | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:06:54
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:06:54.667302  244383 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:06:54.667430  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667448  244383 out.go:309] Setting ErrFile to fd 2...
	I0601 11:06:54.667455  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667611  244383 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:06:54.668037  244383 out.go:303] Setting JSON to false
	I0601 11:06:54.669846  244383 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2969,"bootTime":1654078646,"procs":645,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:06:54.669914  244383 start.go:125] virtualization: kvm guest
	I0601 11:06:54.672039  244383 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:06:54.673519  244383 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:06:54.673532  244383 notify.go:193] Checking for updates...
	I0601 11:06:54.676498  244383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:06:54.678066  244383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:06:54.679578  244383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:06:54.681049  244383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:06:54.682891  244383 config.go:178] Loaded profile config "calico-20220601104839-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683008  244383 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683105  244383 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:06:54.683158  244383 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:06:54.724298  244383 docker.go:137] docker version: linux-20.10.16
	I0601 11:06:54.724374  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.826819  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.7540349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.826932  244383 docker.go:254] overlay module found
	I0601 11:06:54.829003  244383 out.go:177] * Using the docker driver based on user configuration
	I0601 11:06:54.830315  244383 start.go:284] selected driver: docker
	I0601 11:06:54.830327  244383 start.go:806] validating driver "docker" against <nil>
	I0601 11:06:54.830352  244383 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:06:54.831265  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.931062  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.859997014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.931188  244383 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:06:54.931414  244383 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:06:54.933788  244383 out.go:177] * Using Docker driver with the root privilege
	I0601 11:06:54.935205  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:06:54.935218  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:06:54.935233  244383 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935238  244383 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935243  244383 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 11:06:54.935250  244383 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:06:54.936846  244383 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:06:54.938038  244383 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:06:54.939519  244383 out.go:177] * Pulling base image ...
	I0601 11:06:54.940856  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:54.940881  244383 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:06:54.940905  244383 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:06:54.940928  244383 cache.go:57] Caching tarball of preloaded images
	I0601 11:06:54.941154  244383 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:06:54.941186  244383 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:06:54.941308  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:06:54.941333  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json: {Name:mk8b3d87cba3844f82b835b906c4fc7fcf103163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
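The two log lines above are the profile being persisted: the cluster config is serialized to config.json under a per-file write lock (Delay:500ms, Timeout:1m0s). A minimal Go sketch of the atomic-write half of that pattern follows; ClusterConfig and saveConfig are illustrative names, not minikube's actual types, and the lock/retry wrapper is omitted.

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// ClusterConfig is a hypothetical, heavily trimmed stand-in for the
// profile config that gets serialized to config.json above.
type ClusterConfig struct {
	Name              string
	KubernetesVersion string
	NodePort          int
}

// saveConfig writes cfg atomically: marshal, write a temp file in the
// same directory, then rename over the target so readers never see a
// half-written config.json.
func saveConfig(path string, cfg ClusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp, err := os.CreateTemp(filepath.Dir(path), ".config-*.json")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // best-effort cleanup; fails harmlessly after a successful rename
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}

func main() {
	cfg := ClusterConfig{
		Name:              "default-k8s-different-port-20220601110654-6708",
		KubernetesVersion: "v1.23.6",
		NodePort:          8444,
	}
	if err := saveConfig("config.json", cfg); err != nil {
		panic(err)
	}
}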
	I0601 11:06:54.986323  244383 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:06:54.986351  244383 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:06:54.986370  244383 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:06:54.986406  244383 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:06:54.986553  244383 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 123.17µs
	I0601 11:06:54.986588  244383 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:06:54.986696  244383 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:06:54.668423  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:57.168205  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:54.989283  244383 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:06:54.989495  244383 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:06:54.989523  244383 client.go:168] LocalClient.Create starting
	I0601 11:06:54.989576  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:06:54.989602  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989620  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.989670  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:06:54.989686  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989697  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.990003  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:06:55.021531  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:06:55.021592  244383 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601110654-6708] to gather additional debugging logs...
	I0601 11:06:55.021618  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708
	W0601 11:06:55.051948  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 returned with exit code 1
	I0601 11:06:55.051984  244383 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601110654-6708]: docker network inspect default-k8s-different-port-20220601110654-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.052003  244383 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601110654-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601110654-6708
	
	** /stderr **
	I0601 11:06:55.052049  244383 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:06:55.083654  244383 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001322e0] misses:0}
	I0601 11:06:55.083702  244383 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:06:55.083718  244383 network_create.go:115] attempt to create docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:06:55.083760  244383 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.150185  244383 network_create.go:99] docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 created
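The network_create.go step above is a thin wrapper over the docker CLI: inspect fails because the network does not exist yet, a free /24 is reserved, and the network is created with an explicit subnet, gateway, and MTU. A sketch reproducing the logged create invocation with os/exec (assuming a local docker daemon; this mirrors the command, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the `docker network create` invocation logged above:
	// bridge driver, pinned subnet/gateway, MTU 1500, minikube label.
	name := "default-k8s-different-port-20220601110654-6708"
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}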
	I0601 11:06:55.150232  244383 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20220601110654-6708" container
	I0601 11:06:55.150301  244383 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:06:55.185029  244383 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220601110654-6708 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:06:55.218896  244383 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.218982  244383 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220601110654-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --entrypoint /usr/bin/test -v default-k8s-different-port-20220601110654-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:06:55.773802  244383 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.773849  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:55.773871  244383 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:06:55.773932  244383 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 11:06:59.334049  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:01.667968  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:03.152484  244383 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (7.378487132s)
	I0601 11:07:03.152523  244383 kic.go:188] duration metric: took 7.378645 seconds to extract preloaded images to volume
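The 7.4-second step above unpacks the preloaded image tarball into the machine's docker volume by running tar inside a short-lived container. A hedged sketch of the same idea; extractPreload is an illustrative helper, and the tarball path and image digest below are placeholders for the full paths in the log:

package main

import "os/exec"

// extractPreload runs tar inside a short-lived container with the
// tarball mounted read-only and the machine volume mounted at
// /extractDir, as in the docker run logged above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	return cmd.Run()
}

func main() {
	// Placeholder arguments; the log uses the jenkins cache path and
	// the kicbase image pinned by digest.
	err := extractPreload(
		"preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4",
		"default-k8s-different-port-20220601110654-6708",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807")
	if err != nil {
		panic(err)
	}
}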
	W0601 11:07:03.152655  244383 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 11:07:03.152754  244383 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:07:03.258344  244383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220601110654-6708 --name default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --network default-k8s-different-port-20220601110654-6708 --ip 192.168.49.2 --volume default-k8s-different-port-20220601110654-6708:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:07:03.640637  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Running}}
	I0601 11:07:03.675247  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.707758  244383 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220601110654-6708 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:07:03.767985  244383 oci.go:247] the created container "default-k8s-different-port-20220601110654-6708" has a running status.
	I0601 11:07:03.768013  244383 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa...
	I0601 11:07:03.823786  244383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:07:03.917787  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.956706  244383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:07:03.956735  244383 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220601110654-6708 chown docker:docker /home/docker/.ssh/authorized_keys]
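The kic.go:210 step generates an SSH key pair on the host, copies the public half into the container's /home/docker/.ssh/authorized_keys, and fixes its ownership. A self-contained sketch of the key-generation part using crypto/rsa and golang.org/x/crypto/ssh (file names mirror the log; this is not minikube's code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate an RSA key, write the private key PEM (id_rsa) and an
	// authorized_keys-format public key (id_rsa.pub), as the kic step
	// above does into the machine directory.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}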
	I0601 11:07:04.044516  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:04.081442  244383 machine.go:88] provisioning docker machine ...
	I0601 11:07:04.081477  244383 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:04.081535  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.119200  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.119405  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.119425  244383 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:07:04.249668  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:07:04.249734  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.283443  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.283593  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.283628  244383 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:07:04.395587  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: 
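The SSH script above is an idempotent /etc/hosts edit: if the hostname is already mapped, do nothing; otherwise rewrite an existing 127.0.1.1 line or append one. The same logic as a pure Go function over the file contents (ensureHostname is an illustrative name, and the regexes approximate the script's grep/sed patterns):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic above: leave the file alone
// if any line already ends with the hostname; else rewrite an existing
// 127.0.1.1 entry, or append one if none exists.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n",
		"default-k8s-different-port-20220601110654-6708"))
}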
	I0601 11:07:04.395617  244383 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:07:04.395643  244383 ubuntu.go:177] setting up certificates
	I0601 11:07:04.395652  244383 provision.go:83] configureAuth start
	I0601 11:07:04.395697  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.427413  244383 provision.go:138] copyHostCerts
	I0601 11:07:04.427469  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:07:04.427481  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:07:04.427543  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:07:04.427622  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:07:04.427632  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:07:04.427659  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:07:04.427708  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:07:04.427721  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:07:04.427753  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:07:04.427802  244383 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
	I0601 11:07:04.535631  244383 provision.go:172] copyRemoteCerts
	I0601 11:07:04.535685  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:07:04.535726  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.568780  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.659152  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:07:04.676610  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:07:04.694731  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:07:04.711549  244383 provision.go:86] duration metric: configureAuth took 315.887909ms
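configureAuth above generates a CA-signed server certificate whose SANs cover the container's static IP, loopback, and the machine hostnames, then copies the CA and server pair into /etc/docker. A compressed crypto/x509 sketch of such a SAN-bearing cert; unlike the log, it creates a throwaway CA instead of loading one from .minikube/certs:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA; in the log the CA key pair already exists under
	// .minikube/certs/, so this part stands in for loading it.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs listed in the provision.go
	// log line above: the static container IP, loopback, hostnames.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-different-port-20220601110654-6708"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
}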
	I0601 11:07:04.711573  244383 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:07:04.711735  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:04.711748  244383 machine.go:91] provisioned docker machine in 630.288068ms
	I0601 11:07:04.711754  244383 client.go:171] LocalClient.Create took 9.722222745s
	I0601 11:07:04.711778  244383 start.go:173] duration metric: libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" took 9.722275215s
	I0601 11:07:04.711793  244383 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:07:04.711800  244383 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:07:04.711844  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:07:04.711903  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.745536  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.831037  244383 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:07:04.833655  244383 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:07:04.833679  244383 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:07:04.833703  244383 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:07:04.833716  244383 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:07:04.833726  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:07:04.833775  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:07:04.833870  244383 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:07:04.833975  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:07:04.840420  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:04.857187  244383 start.go:309] post-start completed in 145.384397ms
	I0601 11:07:04.857493  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.888747  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:07:04.888963  244383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:07:04.889000  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.919352  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.000243  244383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:07:05.004050  244383 start.go:134] duration metric: createHost completed in 10.017341223s
	I0601 11:07:05.004075  244383 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 10.017502791s
	I0601 11:07:05.004171  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035905  244383 ssh_runner.go:195] Run: systemctl --version
	I0601 11:07:05.035960  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035972  244383 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:07:05.036031  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.069327  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.070632  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.175990  244383 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:07:05.186279  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:07:05.194913  244383 docker.go:187] disabling docker service ...
	I0601 11:07:05.194953  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:07:05.211132  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:07:05.219763  244383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:07:05.302855  244383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:07:05.379942  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:07:05.388684  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:07:05.401125  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.408798  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.416626  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.424218  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.431786  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:07:05.439234  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
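The last edit drops a base64 payload into /etc/containerd/containerd.conf.d/02-containerd.conf; decoding it shows it is just the TOML version header:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Decode the payload written to 02-containerd.conf above.
	b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", b) // prints "version = 2\n"
}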
	I0601 11:07:05.451481  244383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:07:05.457796  244383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:07:05.464201  244383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:07:05.540478  244383 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:07:05.650499  244383 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:07:05.650567  244383 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:07:05.654052  244383 start.go:468] Will wait 60s for crictl version
	I0601 11:07:05.654103  244383 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:07:05.681128  244383 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:07:05.681188  244383 ssh_runner.go:195] Run: containerd --version
	I0601 11:07:05.710828  244383 ssh_runner.go:195] Run: containerd --version
	I0601 11:07:05.741779  244383 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:07:05.743207  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:07:05.773719  244383 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:07:05.777293  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:07:05.788623  244383 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:07:05.790049  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:07:05.790117  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.812809  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.812831  244383 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:07:05.812869  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.834860  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.834879  244383 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:07:05.834947  244383 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:07:05.857173  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:05.857192  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:05.857218  244383 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:07:05.857235  244383 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:07:05.857383  244383 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
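The multi-document config above (kubeadm.go:162) is rendered from the options printed at kubeadm.go:158. A hypothetical, heavily trimmed text/template rendering of just the ClusterConfiguration stanza, to show the shape of that step; the real template carries many more fields:

package main

import (
	"os"
	"text/template"
)

// clusterTmpl is a trimmed-down, illustrative template for the
// ClusterConfiguration document above; the real one also carries the
// apiServer/controllerManager/scheduler/etcd sections.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("cluster").Parse(clusterTmpl))
	err := t.Execute(os.Stdout, map[string]interface{}{
		"APIServerPort":     8444,
		"KubernetesVersion": "v1.23.6",
		"DNSDomain":         "cluster.local",
		"PodSubnet":         "10.244.0.0/16",
		"ServiceCIDR":       "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}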
	
	I0601 11:07:05.857471  244383 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
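The unit text above becomes the kubelet systemd drop-in shipped a few lines below (scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A sketch assembling such a drop-in from a flag map; the flag values are copied from the ExecStart line, and the bare ExecStart= is what clears the packaged default before the override:

package main

import (
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	// A few of the kubelet flags from the ExecStart line above; the
	// real drop-in carries the full set.
	flags := map[string]string{
		"cni-conf-dir":               "/etc/cni/net.mk",
		"container-runtime":          "remote",
		"container-runtime-endpoint": "unix:///run/containerd/containerd.sock",
		"node-ip":                    "192.168.49.2",
	}
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic flag order

	var b strings.Builder
	b.WriteString("[Unit]\nWants=containerd.service\n\n[Service]\nExecStart=\n")
	b.WriteString("ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet")
	for _, k := range keys {
		fmt.Fprintf(&b, " --%s=%s", k, flags[k])
	}
	b.WriteString("\n\n[Install]\n")
	if err := os.WriteFile("10-kubeadm.conf", []byte(b.String()), 0644); err != nil {
		panic(err)
	}
}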
	I0601 11:07:05.857530  244383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:07:05.864412  244383 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:07:05.864485  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:07:05.870921  244383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:07:05.883133  244383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:07:05.896240  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:07:05.908996  244383 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:07:05.911816  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:07:05.920740  244383 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:07:05.920863  244383 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:07:05.920906  244383 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:07:05.920964  244383 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:07:05.920984  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt with IP's: []
	I0601 11:07:06.190511  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt ...
	I0601 11:07:06.190541  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt: {Name:mk1f0de9f338c1565864d345295f211cd6b42042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190751  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key ...
	I0601 11:07:06.190766  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key: {Name:mk3abd1ec1bc2a3303283efb1d56bffeb558d491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190855  244383 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:07:06.190870  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:07:06.411949  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 ...
	I0601 11:07:06.411982  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2: {Name:mk21c89d2fdd1fdc207dd136def37f5d90a62bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412202  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 ...
	I0601 11:07:06.412221  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2: {Name:mk2f4aae6eb49e6251c3e6c8e6f0f6462f382896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412314  244383 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt
	I0601 11:07:06.412369  244383 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key
	I0601 11:07:06.412451  244383 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:07:06.412469  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt with IP's: []
	I0601 11:07:06.545552  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt ...
	I0601 11:07:06.545619  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt: {Name:mkee564e3149cd8be755ca3cbe99f47feac8e4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.545807  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key ...
	I0601 11:07:06.545819  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key: {Name:mk3354416a46b334b24512eafd987800637af3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.547104  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:07:06.547148  244383 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:07:06.547174  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:07:06.547194  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:07:06.547234  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:07:06.547271  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:07:06.547327  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:06.547961  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:07:06.565921  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:07:06.584089  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:07:06.601191  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:07:06.618465  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:07:06.635815  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:07:06.653212  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:07:06.670886  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:07:06.687801  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:07:06.704953  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:07:06.721444  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:07:06.737875  244383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:07:06.751738  244383 ssh_runner.go:195] Run: openssl version
	I0601 11:07:06.756719  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:07:06.764146  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767163  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767216  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.771914  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:07:06.778934  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:07:06.786568  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789545  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789607  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.794248  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:07:06.801364  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:07:06.808247  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811196  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811252  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.816241  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
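The ls/openssl/ln triplets above install each extra CA certificate under its OpenSSL subject-hash name (<hash>.0) in /etc/ssl/certs, which is how OpenSSL-based clients locate trust anchors. A sketch shelling out the same way; like the sudo'd commands in the log, it needs root to write the symlink:

package main

import (
	"os/exec"
	"strings"
)

// hashLink reproduces the openssl/ln pair above: compute the OpenSSL
// subject hash of certPath and symlink /etc/ssl/certs/<hash>.0 to it.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return exec.Command("ln", "-fs", certPath, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}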
	I0601 11:07:06.823684  244383 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:07:06.823768  244383 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:07:06.823809  244383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:07:06.847418  244383 cri.go:87] found id: ""
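
[Annotation] The crictl call above lists every kube-system container (running or exited) by its CRI pod-namespace label; the empty "found id" result tells minikube there is no prior control plane on this node, so kubeadm init proceeds from scratch. A hedged Go sketch of the same check, not minikube's code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List all kube-system containers by CRI label, as in the log's
    	// "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system".
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	// Zero IDs means a fresh node: safe to run kubeadm init.
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }
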
	I0601 11:07:06.847481  244383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:07:06.854612  244383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:07:06.861596  244383 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:07:06.861652  244383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:07:06.868516  244383 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:07:06.868568  244383 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:07:03.668636  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:06.167338  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:07.121183  244383 out.go:204]   - Generating certificates and keys ...
	I0601 11:07:09.218861  244383 out.go:204]   - Booting up control plane ...
	I0601 11:07:08.167714  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:10.168162  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:12.667278  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:14.668246  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:17.168197  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.259795  244383 out.go:204]   - Configuring RBAC rules ...
	I0601 11:07:21.672636  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:21.672654  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:21.674533  244383 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:07:19.668390  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.668490  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.675845  244383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:07:21.679515  244383 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:07:21.679534  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:07:21.692464  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:07:22.465311  244383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:07:22.465382  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.465395  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_07_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.521244  244383 ops.go:34] apiserver oom_adj: -16
	I0601 11:07:22.521263  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.109047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.609743  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.109036  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.609779  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.167646  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:26.168090  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:25.109823  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:25.609061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.108863  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.608780  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.109061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.609116  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.109699  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.609047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.109170  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.608851  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.667871  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:30.668198  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:30.109055  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:30.608852  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.109521  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.609057  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.108853  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.609531  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.108838  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.608822  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.108973  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.609839  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.671502  244383 kubeadm.go:1045] duration metric: took 12.206180961s to wait for elevateKubeSystemPrivileges.
	I0601 11:07:34.671537  244383 kubeadm.go:397] StartCluster complete in 27.847858486s
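
[Annotation] The run of "kubectl get sa default" calls above, spaced roughly 500ms apart, is minikube waiting for kube-controller-manager to create the default ServiceAccount before the minikube-rbac binding can take effect; the log reports the wait took about 12.2s. A minimal sketch of such a wait loop, with hypothetical paths and timeout:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls "kubectl get sa default" until the ServiceAccount
    // exists, mirroring the ~500ms retries visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // ServiceAccount exists; RBAC binding can proceed
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.23.6/kubectl",
    		"/var/lib/minikube/kubeconfig", 2*time.Minute)
    	fmt.Println(err)
    }
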
	I0601 11:07:34.671557  244383 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:34.671645  244383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:07:34.673551  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:35.189278  244383 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:07:35.189337  244383 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:07:35.191451  244383 out.go:177] * Verifying Kubernetes components...
	I0601 11:07:35.189391  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:07:35.189390  244383 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 11:07:35.189576  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:35.192926  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:07:35.192990  244383 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193023  244383 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193071  244383 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193025  244383 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.193134  244383 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:07:35.193178  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.193498  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.193681  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.209430  244383 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:07:35.237918  244383 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:07:35.239410  244383 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.239425  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:07:35.239470  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.255735  244383 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.255765  244383 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:07:35.255799  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.256352  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.277557  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.290858  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:07:35.296059  244383 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.296086  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:07:35.296137  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.338006  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.376722  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.468185  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.653594  244383 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0601 11:07:35.783515  244383 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 11:07:33.167882  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:35.168161  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:37.168296  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:35.784841  244383 addons.go:417] enableAddons completed in 595.455746ms
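
[Annotation] The sed pipeline at 11:07:35.290858 and the "host record injected" line show minikube rewriting the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1). An illustrative Go sketch of the same string surgery on a Corefile; the sample Corefile is a simplification, not the real ConfigMap contents:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS "hosts" block ahead of the forward
    // plugin so host.minikube.internal resolves to the host gateway, matching
    // the sed pipeline in the log above.
    func injectHostRecord(corefile, hostIP string) string {
    	hosts := fmt.Sprintf("hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n        ", hostIP)
    	return strings.Replace(corefile, "forward . /etc/resolv.conf",
    		hosts+"forward . /etc/resolv.conf", 1)
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n    }"
    	fmt.Println(injectHostRecord(corefile, "192.168.49.1"))
    }
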
	I0601 11:07:37.216016  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:39.667840  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:42.167654  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:39.717025  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:42.216640  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:44.667876  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:47.168006  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:44.716894  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:47.216117  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:49.217067  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:49.168183  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:51.667932  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:51.716491  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:54.216277  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:54.167913  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:56.167953  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:56.216761  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:58.717105  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:58.168275  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:00.668037  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:01.216388  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:03.716389  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:03.167969  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:05.667837  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.168013  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.670567  232046 node_ready.go:38] duration metric: took 4m0.010022239s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:08:08.673338  232046 out.go:177] 
	W0601 11:08:08.675576  232046 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:08:08.675599  232046 out.go:239] * 
	W0601 11:08:08.676630  232046 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:08:08.678476  232046 out.go:177] 
	I0601 11:08:05.717011  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:08.215942  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:10.216368  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:12.216490  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:14.716947  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:17.216379  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:19.216687  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:21.216835  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:23.717175  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:26.216167  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:28.216729  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:30.216872  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:32.716452  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:35.216938  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:37.716649  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:39.716753  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:42.215917  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:44.216056  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:46.216458  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:48.216662  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:50.716633  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:52.716937  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:55.216648  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:57.716740  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:00.217259  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:02.716121  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:04.716668  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:06.716874  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:08.717065  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:11.216427  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:13.716769  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:16.216572  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:18.715438  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:20.716744  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:23.216674  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:25.716243  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:27.716345  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:29.716770  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:32.217046  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:34.716539  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:36.716922  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:38.717062  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:40.717196  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:43.216722  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:45.716601  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:47.716677  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:49.718424  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:52.216702  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:54.716437  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:57.216473  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:59.216703  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:01.716563  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:04.216144  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:06.216284  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:08.716579  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:11.216102  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:13.216282  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:15.716437  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:18.216335  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:20.715993  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:22.716802  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:25.216481  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:27.216823  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:29.716428  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:31.716531  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:34.216325  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:36.216576  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:38.216755  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:40.716532  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:43.216563  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:45.716341  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:47.716680  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:50.216218  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:52.716283  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:54.716952  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:57.216293  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:59.216999  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:01.716144  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:03.716378  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:05.716604  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:08.216289  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:10.216683  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:12.716931  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:15.216225  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:17.216376  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:19.716558  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:22.216186  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:24.216522  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:26.717180  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:29.216092  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:31.216231  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:33.716223  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:35.218171  244383 node_ready.go:38] duration metric: took 4m0.008704673s waiting for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:11:35.220452  244383 out.go:177] 
	W0601 11:11:35.221885  244383 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:11:35.221913  244383 out.go:239] * 
	W0601 11:11:35.222650  244383 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:11:35.224616  244383 out.go:177] 
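
[Annotation] Both failed runs above spend their full 4m0s in node_ready.go polling the node's Ready condition every couple of seconds until the 6m start budget is exhausted. The loop below is an illustrative client-go equivalent of that wait, not minikube's implementation; the kubeconfig path is hypothetical:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition until it is True or the
    // deadline passes, roughly what the node_ready.go lines above are doing.
    func waitNodeReady(client *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // the log shows checks ~2.5s apart
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitNodeReady(client, "default-k8s-different-port-20220601110654-6708", 6*time.Minute))
    }
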
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bc21545827165       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   0a6a4ae8178de
	01651d3598805       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   f1d9aedf42d24
	0b9cf8973c884       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   ac769aefe340a
	f18885873e44e       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   3736e1d98ec61
	92f272874915c       b2756210eeabf       12 minutes ago      Running             etcd                      0                   41c0131fc288d
	e4d08ecd5adee       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   0a47511bd2aec
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 10:59:01 UTC, end at Wed 2022-06-01 11:11:50 UTC. --
	Jun 01 11:05:06 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:06.288903624Z" level=warning msg="cleaning up after shim disconnected" id=df2a3875ec723f9785edea449611ef162e14636786905cd989570b375ffed8b4 namespace=k8s.io
	Jun 01 11:05:06 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:06.288915658Z" level=info msg="cleaning up dead shim"
	Jun 01 11:05:06 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:06.298269936Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:05:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3102 runtime=io.containerd.runc.v2\n"
	Jun 01 11:05:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:07.057612928Z" level=info msg="RemoveContainer for \"8f9cec9f497f70922114b5778ad83667fefb19394aa9a2008cd70a55ebd910b6\""
	Jun 01 11:05:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:07.062471773Z" level=info msg="RemoveContainer for \"8f9cec9f497f70922114b5778ad83667fefb19394aa9a2008cd70a55ebd910b6\" returns successfully"
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.529320245Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.542361255Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\""
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.542763739Z" level=info msg="StartContainer for \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\""
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.566563238Z" level=info msg="RemoveContainer for \"df2a3875ec723f9785edea449611ef162e14636786905cd989570b375ffed8b4\""
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.570426793Z" level=info msg="RemoveContainer for \"df2a3875ec723f9785edea449611ef162e14636786905cd989570b375ffed8b4\" returns successfully"
	Jun 01 11:05:18 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:05:18.678510329Z" level=info msg="StartContainer for \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\" returns successfully"
	Jun 01 11:07:58 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:07:58.981613200Z" level=info msg="shim disconnected" id=d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a
	Jun 01 11:07:58 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:07:58.981683206Z" level=warning msg="cleaning up after shim disconnected" id=d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a namespace=k8s.io
	Jun 01 11:07:58 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:07:58.981699329Z" level=info msg="cleaning up dead shim"
	Jun 01 11:07:58 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:07:58.992168635Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:07:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3538 runtime=io.containerd.runc.v2\n"
	Jun 01 11:08:27 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:08:27.529080919Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jun 01 11:08:27 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:08:27.540778103Z" level=info msg="CreateContainer within sandbox \"0a6a4ae8178de388eac34f80c746eb474698ca59fb55ee2a3b96f3fe0be6b4cb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443\""
	Jun 01 11:08:27 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:08:27.541190322Z" level=info msg="StartContainer for \"bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443\""
	Jun 01 11:08:27 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:08:27.662166581Z" level=info msg="StartContainer for \"bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443\" returns successfully"
	Jun 01 11:11:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:07.892985964Z" level=info msg="shim disconnected" id=bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443
	Jun 01 11:11:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:07.893042710Z" level=warning msg="cleaning up after shim disconnected" id=bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443 namespace=k8s.io
	Jun 01 11:11:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:07.893063061Z" level=info msg="cleaning up dead shim"
	Jun 01 11:11:07 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:07.902635003Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:11:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3996 runtime=io.containerd.runc.v2\n"
	Jun 01 11:11:08 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:08.565765811Z" level=info msg="RemoveContainer for \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\""
	Jun 01 11:11:08 old-k8s-version-20220601105850-6708 containerd[511]: time="2022-06-01T11:11:08.570873203Z" level=info msg="RemoveContainer for \"d1efccf6d9e25e29664f8909e91d77c5ed7bdfc202c3a011aa009bb469f6588a\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220601105850-6708
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220601105850-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=old-k8s-version-20220601105850-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T10_59_29_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 10:59:23 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:11:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:11:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:11:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:11:23 +0000   Wed, 01 Jun 2022 10:59:20 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    old-k8s-version-20220601105850-6708
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	 System UUID:                cf752223-716a-46c7-b06a-74cba9af00dc
	 Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	 Kernel Version:             5.13.0-1027-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.4
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220601105850-6708                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kindnet-rvdm8                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                kube-apiserver-old-k8s-version-20220601105850-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-20220601105850-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-9db28                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-20220601105850-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From                                             Message
	  ----    ------                   ----               ----                                             -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-20220601105850-6708  Starting kube-proxy.
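
[Annotation] The Ready=False condition above ("cni plugin not initialized") is consistent with the container status and containerd sections: kindnet-cni has exited repeatedly, so no CNI config ever lands in the kubelet's conf dir, which these clusters point at /etc/cni/net.mk via the kubelet cni-conf-dir extra option. A hypothetical quick check for that state, offered only as a sketch:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Kubelet stays NotReady until a CNI config appears in its conf dir;
    	// /etc/cni/net.mk is the dir configured for these test clusters.
    	entries, err := os.ReadDir("/etc/cni/net.mk")
    	if err != nil {
    		fmt.Println("cannot read CNI conf dir:", err)
    		return
    	}
    	if len(entries) == 0 {
    		fmt.Println("no CNI config yet: the CNI daemonset has not written one")
    		return
    	}
    	for _, e := range entries {
    		fmt.Println("CNI config:", e.Name())
    	}
    }
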
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [92f272874915c4877257c68e1d43539f7183cbef97f4b0837113afe72f1cdb3c] <==
	* 2022-06-01 10:59:19.557971 W | auth: simple token is not cryptographically signed
	2022-06-01 10:59:19.561258 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-06-01 10:59:19.561609 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-06-01 10:59:19.561830 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2022-06-01 10:59:19.563596 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-06-01 10:59:19.563780 I | embed: listening for metrics on http://192.168.58.2:2381
	2022-06-01 10:59:19.563857 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-06-01 10:59:20.398057 I | raft: b2c6679ac05f2cf1 is starting a new election at term 1
	2022-06-01 10:59:20.398087 I | raft: b2c6679ac05f2cf1 became candidate at term 2
	2022-06-01 10:59:20.398113 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	2022-06-01 10:59:20.398122 I | raft: b2c6679ac05f2cf1 became leader at term 2
	2022-06-01 10:59:20.398127 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2022-06-01 10:59:20.398431 I | etcdserver: published {Name:old-k8s-version-20220601105850-6708 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2022-06-01 10:59:20.398459 I | embed: ready to serve client requests
	2022-06-01 10:59:20.398511 I | embed: ready to serve client requests
	2022-06-01 10:59:20.398527 I | etcdserver: setting up the initial cluster version to 3.3
	2022-06-01 10:59:20.399286 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-06-01 10:59:20.399361 I | etcdserver/api: enabled capabilities for version 3.3
	2022-06-01 10:59:20.400666 I | embed: serving client requests on 192.168.58.2:2379
	2022-06-01 10:59:20.401288 I | embed: serving client requests on 127.0.0.1:2379
	2022-06-01 11:00:27.079535 W | etcdserver: request "header:<ID:3238511576856218971 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:394 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238511576856218969 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>" with result "size:16" took too long (105.707876ms) to execute
	2022-06-01 11:00:27.370158 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (109.381517ms) to execute
	2022-06-01 11:07:02.455767 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:799" took too long (253.04121ms) to execute
	2022-06-01 11:09:20.420123 I | mvcc: store.index: compact 468
	2022-06-01 11:09:20.420844 I | mvcc: finished scheduled compaction at 468 (took 384.008µs)
	
	* 
	* ==> kernel <==
	*  11:11:50 up 54 min,  0 users,  load average: 0.86, 1.21, 1.61
	Linux old-k8s-version-20220601105850-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0b9cf8973c8844f5d3f241696625e5764fbd79a0c0fa64202fca8a67567e726a] <==
	* I0601 10:59:23.500638       1 establishing_controller.go:73] Starting EstablishingController
	I0601 10:59:23.500713       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0601 10:59:23.500739       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0601 10:59:23.502269       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.58.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0601 10:59:23.600240       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 10:59:23.600365       1 cache.go:39] Caches are synced for autoregister controller
	I0601 10:59:23.600658       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 10:59:23.653039       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0601 10:59:24.500177       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0601 10:59:24.500198       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 10:59:24.500206       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 10:59:24.504915       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0601 10:59:24.507571       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0601 10:59:24.507599       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0601 10:59:25.260704       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 10:59:26.281264       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 10:59:26.561277       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0601 10:59:26.876565       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 10:59:26.877208       1 controller.go:606] quota admission added evaluator for: endpoints
	I0601 10:59:27.764458       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0601 10:59:28.362361       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0601 10:59:28.727470       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0601 10:59:44.218023       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0601 10:59:44.232173       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0601 10:59:44.620734       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [f18885873e44ef000cea8b73305d4b972b24f41b3a821ebf6ed2fbb3c400745d] <==
	* W0601 10:59:44.510205       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="old-k8s-version-20220601105850-6708" does not exist
	I0601 10:59:44.513080       1 shared_informer.go:204] Caches are synced for attach detach 
	I0601 10:59:44.515251       1 shared_informer.go:204] Caches are synced for taint 
	I0601 10:59:44.515323       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0601 10:59:44.515326       1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: 
	W0601 10:59:44.515439       1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-20220601105850-6708. Assuming now as a timestamp.
	I0601 10:59:44.515423       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20220601105850-6708", UID:"9a70fc40-abc0-4b88-bdf7-4c4dea7658d1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20220601105850-6708 event: Registered Node old-k8s-version-20220601105850-6708 in Controller
	I0601 10:59:44.515473       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 10:59:44.562663       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0601 10:59:44.562672       1 shared_informer.go:204] Caches are synced for stateful set 
	I0601 10:59:44.566561       1 shared_informer.go:204] Caches are synced for node 
	I0601 10:59:44.566584       1 range_allocator.go:172] Starting range CIDR allocator
	I0601 10:59:44.566598       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
	I0601 10:59:44.578102       1 shared_informer.go:204] Caches are synced for TTL 
	I0601 10:59:44.616316       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0601 10:59:44.631230       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"46c63a7a-da9c-4b21-b27e-3ab2cc1bf42c", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-9db28
	I0601 10:59:44.633700       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"aee4ae9e-2298-4d10-81af-933537f4ccd9", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rvdm8
	I0601 10:59:44.666231       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0601 10:59:44.666319       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 10:59:44.667908       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0601 10:59:44.674142       1 range_allocator.go:359] Set node old-k8s-version-20220601105850-6708 PodCIDR to [10.244.0.0/24]
	I0601 10:59:44.709226       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0601 10:59:44.718073       1 shared_informer.go:204] Caches are synced for resource quota 
	I0601 10:59:45.806836       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I0601 10:59:45.907072       1 shared_informer.go:204] Caches are synced for resource quota 
	
	* 
	* ==> kube-proxy [01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3] <==
	* W0601 10:59:45.684675       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0601 10:59:45.696662       1 node.go:135] Successfully retrieved node IP: 192.168.58.2
	I0601 10:59:45.696711       1 server_others.go:149] Using iptables Proxier.
	I0601 10:59:45.697092       1 server.go:529] Version: v1.16.0
	I0601 10:59:45.698531       1 config.go:313] Starting service config controller
	I0601 10:59:45.698559       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0601 10:59:45.698582       1 config.go:131] Starting endpoints config controller
	I0601 10:59:45.698600       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0601 10:59:45.798783       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0601 10:59:45.799058       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e4d08ecd5adee34f6ccfaeb042d497cedc44597ee436ef3a30c0c98e725c3582] <==
	* I0601 10:59:23.568522       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0601 10:59:23.569198       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0601 10:59:23.658434       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 10:59:23.660485       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 10:59:23.661016       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 10:59:23.662119       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:23.665509       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:23.665685       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 10:59:23.665696       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 10:59:23.665786       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 10:59:23.665877       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 10:59:23.666262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 10:59:23.667640       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 10:59:24.659538       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 10:59:24.661616       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 10:59:24.662868       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 10:59:24.664538       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:24.666434       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 10:59:24.667461       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 10:59:24.668599       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 10:59:24.669697       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 10:59:24.670863       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 10:59:24.672730       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 10:59:24.673763       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 10:59:45.971438       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 10:59:01 UTC, end at Wed 2022-06-01 11:11:51 UTC. --
	Jun 01 11:10:08 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:08.784763     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:13 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:13.785496     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:18 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:18.786232     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:23 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:23.786988     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:28 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:28.787783     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:33 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:33.788512     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:38 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:38.789326     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:43 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:43.790070     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:48 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:48.790925     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:53 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:53.791797     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:10:58 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:10:58.792624     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:03 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:03.793544     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:08 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:08.565904     914 pod_workers.go:191] Error syncing pod 0648d955-2d20-449d-88b9-57fb087825d8 ("kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"
	Jun 01 11:11:08 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:08.794295     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:13 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:13.795086     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:18 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:18.795860     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:20 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:20.527035     914 pod_workers.go:191] Error syncing pod 0648d955-2d20-449d-88b9-57fb087825d8 ("kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"
	Jun 01 11:11:23 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:23.796712     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:28 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:28.797454     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:33 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:33.526874     914 pod_workers.go:191] Error syncing pod 0648d955-2d20-449d-88b9-57fb087825d8 ("kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"
	Jun 01 11:11:33 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:33.798326     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:38 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:38.799114     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:43 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:43.799970     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:11:44 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:44.526997     914 pod_workers.go:191] Error syncing pod 0648d955-2d20-449d-88b9-57fb087825d8 ("kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-rvdm8_kube-system(0648d955-2d20-449d-88b9-57fb087825d8)"
	Jun 01 11:11:48 old-k8s-version-20220601105850-6708 kubelet[914]: E0601 11:11:48.800849     914 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
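
The kubelet log above points at the likely root cause: the kindnet-cni container is stuck in CrashLoopBackOff, so the CNI plugin is never initialized and the container runtime network never becomes Ready. A minimal way to confirm from the same workstation (a sketch: the pod name kindnet-rvdm8 and container kindnet-cni are taken from the log above; --previous assumes the container has crashed at least once):

	kubectl --context old-k8s-version-20220601105850-6708 get nodes -o wide
	kubectl --context old-k8s-version-20220601105850-6708 -n kube-system get pods -o wide
	# inspect the crashed CNI container's last run (pod/container names from the log above)
	kubectl --context old-k8s-version-20220601105850-6708 -n kube-system logs kindnet-rvdm8 -c kindnet-cni --previous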
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-5644d7b6d9-5z28m storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 describe pod busybox coredns-5644d7b6d9-5z28m storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601105850-6708 describe pod busybox coredns-5644d7b6d9-5z28m storage-provisioner: exit status 1 (59.574808ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vdddm (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-vdddm:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-vdddm
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  8m4s                   default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  5m27s (x1 over 6m57s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-5z28m" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220601105850-6708 describe pod busybox coredns-5644d7b6d9-5z28m storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (484.54s)
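
Reading the failure end to end: kindnet never comes up, the node therefore never reports Ready, the scheduler sees a taint the busybox pod does not tolerate (given the kubelet errors above this is almost certainly node.kubernetes.io/not-ready, though the describe output does not name it), and DeployApp times out with the pod still Pending. To see the offending taint directly (sketch, assuming the not-ready taint):

	# Taints appear near the top of the node description
	kubectl --context old-k8s-version-20220601105850-6708 describe node old-k8s-version-20220601105850-6708 | grep -A 3 Taints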

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (282.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220601110654-6708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:07:05.879244    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:07:12.928771    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 11:07:21.870924    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 11:07:34.133276    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 11:07:54.651901    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
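The cert_rotation errors above reference client certificates for profiles created by earlier tests in this run (cilium, addons, functional, ingress-addon-legacy, enable-default-cni); those profile directories have apparently been deleted, so the shared test process's certificate watcher logs a miss on each rotation attempt. They are noise relative to this test. To check which profiles still exist (sketch):

	# lists profiles whose config still exists under $MINIKUBE_HOME
	out/minikube-linux-amd64 profile list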

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220601110654-6708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (4m40.620098658s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:06:54.667302  244383 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:06:54.667430  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667448  244383 out.go:309] Setting ErrFile to fd 2...
	I0601 11:06:54.667455  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667611  244383 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:06:54.668037  244383 out.go:303] Setting JSON to false
	I0601 11:06:54.669846  244383 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2969,"bootTime":1654078646,"procs":645,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:06:54.669914  244383 start.go:125] virtualization: kvm guest
	I0601 11:06:54.672039  244383 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:06:54.673519  244383 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:06:54.673532  244383 notify.go:193] Checking for updates...
	I0601 11:06:54.676498  244383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:06:54.678066  244383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:06:54.679578  244383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:06:54.681049  244383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:06:54.682891  244383 config.go:178] Loaded profile config "calico-20220601104839-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683008  244383 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683105  244383 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:06:54.683158  244383 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:06:54.724298  244383 docker.go:137] docker version: linux-20.10.16
	I0601 11:06:54.724374  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.826819  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.7540349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.826932  244383 docker.go:254] overlay module found
	I0601 11:06:54.829003  244383 out.go:177] * Using the docker driver based on user configuration
	I0601 11:06:54.830315  244383 start.go:284] selected driver: docker
	I0601 11:06:54.830327  244383 start.go:806] validating driver "docker" against <nil>
	I0601 11:06:54.830352  244383 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:06:54.831265  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.931062  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.859997014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.931188  244383 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:06:54.931414  244383 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:06:54.933788  244383 out.go:177] * Using Docker driver with the root privilege
	I0601 11:06:54.935205  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:06:54.935218  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:06:54.935233  244383 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935238  244383 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935243  244383 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 11:06:54.935250  244383 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:06:54.936846  244383 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:06:54.938038  244383 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:06:54.939519  244383 out.go:177] * Pulling base image ...
	I0601 11:06:54.940856  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:54.940881  244383 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:06:54.940905  244383 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:06:54.940928  244383 cache.go:57] Caching tarball of preloaded images
	I0601 11:06:54.941154  244383 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:06:54.941186  244383 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:06:54.941308  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:06:54.941333  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json: {Name:mk8b3d87cba3844f82b835b906c4fc7fcf103163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:06:54.986323  244383 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:06:54.986351  244383 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:06:54.986370  244383 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:06:54.986406  244383 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:06:54.986553  244383 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 123.17µs
	I0601 11:06:54.986588  244383 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:06:54.986696  244383 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:06:54.989283  244383 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:06:54.989495  244383 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:06:54.989523  244383 client.go:168] LocalClient.Create starting
	I0601 11:06:54.989576  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:06:54.989602  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989620  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.989670  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:06:54.989686  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989697  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.990003  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:06:55.021531  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:06:55.021592  244383 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601110654-6708] to gather additional debugging logs...
	I0601 11:06:55.021618  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708
	W0601 11:06:55.051948  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 returned with exit code 1
	I0601 11:06:55.051984  244383 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601110654-6708]: docker network inspect default-k8s-different-port-20220601110654-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.052003  244383 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601110654-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601110654-6708
	
	** /stderr **
	I0601 11:06:55.052049  244383 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:06:55.083654  244383 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001322e0] misses:0}
	I0601 11:06:55.083702  244383 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:06:55.083718  244383 network_create.go:115] attempt to create docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:06:55.083760  244383 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.150185  244383 network_create.go:99] docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 created
	I0601 11:06:55.150232  244383 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20220601110654-6708" container
	I0601 11:06:55.150301  244383 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:06:55.185029  244383 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220601110654-6708 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:06:55.218896  244383 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.218982  244383 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220601110654-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --entrypoint /usr/bin/test -v default-k8s-different-port-20220601110654-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:06:55.773802  244383 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.773849  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:55.773871  244383 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:06:55.773932  244383 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 11:07:03.152484  244383 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (7.378487132s)
	I0601 11:07:03.152523  244383 kic.go:188] duration metric: took 7.378645 seconds to extract preloaded images to volume
	W0601 11:07:03.152655  244383 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 11:07:03.152754  244383 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:07:03.258344  244383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220601110654-6708 --name default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --network default-k8s-different-port-20220601110654-6708 --ip 192.168.49.2 --volume default-k8s-different-port-20220601110654-6708:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:07:03.640637  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Running}}
	I0601 11:07:03.675247  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.707758  244383 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220601110654-6708 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:07:03.767985  244383 oci.go:247] the created container "default-k8s-different-port-20220601110654-6708" has a running status.
	I0601 11:07:03.768013  244383 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa...
	I0601 11:07:03.823786  244383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:07:03.917787  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.956706  244383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:07:03.956735  244383 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220601110654-6708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:07:04.044516  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:04.081442  244383 machine.go:88] provisioning docker machine ...
	I0601 11:07:04.081477  244383 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:04.081535  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.119200  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.119405  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.119425  244383 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:07:04.249668  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:07:04.249734  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.283443  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.283593  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.283628  244383 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:07:04.395587  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:07:04.395617  244383 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:07:04.395643  244383 ubuntu.go:177] setting up certificates
	I0601 11:07:04.395652  244383 provision.go:83] configureAuth start
	I0601 11:07:04.395697  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.427413  244383 provision.go:138] copyHostCerts
	I0601 11:07:04.427469  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:07:04.427481  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:07:04.427543  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:07:04.427622  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:07:04.427632  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:07:04.427659  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:07:04.427708  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:07:04.427721  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:07:04.427753  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:07:04.427802  244383 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
	I0601 11:07:04.535631  244383 provision.go:172] copyRemoteCerts
	I0601 11:07:04.535685  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:07:04.535726  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.568780  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.659152  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:07:04.676610  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:07:04.694731  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:07:04.711549  244383 provision.go:86] duration metric: configureAuth took 315.887909ms
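
	configureAuth above generated a server certificate whose SAN list (192.168.49.2, 127.0.0.1, localhost, minikube, and the profile name) has to cover every address clients will use. When a run like this needs debugging, the SANs on the copied cert can be read back on the node with stock openssl:

	  sudo openssl x509 -noout -text -in /etc/docker/server.pem \
	    | grep -A1 'Subject Alternative Name'
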
	I0601 11:07:04.711573  244383 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:07:04.711735  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:04.711748  244383 machine.go:91] provisioned docker machine in 630.288068ms
	I0601 11:07:04.711754  244383 client.go:171] LocalClient.Create took 9.722222745s
	I0601 11:07:04.711778  244383 start.go:173] duration metric: libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" took 9.722275215s
	I0601 11:07:04.711793  244383 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:07:04.711800  244383 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:07:04.711844  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:07:04.711903  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.745536  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.831037  244383 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:07:04.833655  244383 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:07:04.833679  244383 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:07:04.833703  244383 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:07:04.833716  244383 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:07:04.833726  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:07:04.833775  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:07:04.833870  244383 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:07:04.833975  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:07:04.840420  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:04.857187  244383 start.go:309] post-start completed in 145.384397ms
	I0601 11:07:04.857493  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.888747  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:07:04.888963  244383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:07:04.889000  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.919352  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.000243  244383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:07:05.004050  244383 start.go:134] duration metric: createHost completed in 10.017341223s
	I0601 11:07:05.004075  244383 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 10.017502791s
	I0601 11:07:05.004171  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035905  244383 ssh_runner.go:195] Run: systemctl --version
	I0601 11:07:05.035960  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035972  244383 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:07:05.036031  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.069327  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.070632  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.175990  244383 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:07:05.186279  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:07:05.194913  244383 docker.go:187] disabling docker service ...
	I0601 11:07:05.194953  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:07:05.211132  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:07:05.219763  244383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:07:05.302855  244383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:07:05.379942  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:07:05.388684  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:07:05.401125  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.408798  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.416626  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.424218  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.431786  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:07:05.439234  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
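
	The sed commands above patch /etc/containerd/config.toml in place (pause image, restrict_oom_score_adj, SystemdCgroup, the CNI conf_dir, and an imports stanza), and the base64 payload dmVyc2lvbiA9IDIK decodes to the one-line drop-in "version = 2". Both are easy to verify by hand:

	  printf %s dmVyc2lvbiA9IDIK | base64 -d      # prints: version = 2
	  grep -E 'sandbox_image|SystemdCgroup|conf_dir|^imports' /etc/containerd/config.toml
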
	I0601 11:07:05.451481  244383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:07:05.457796  244383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:07:05.464201  244383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:07:05.540478  244383 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:07:05.650499  244383 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:07:05.650567  244383 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:07:05.654052  244383 start.go:468] Will wait 60s for crictl version
	I0601 11:07:05.654103  244383 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:07:05.681128  244383 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:07:05.681188  244383 ssh_runner.go:195] Run: containerd --version
	I0601 11:07:05.710828  244383 ssh_runner.go:195] Run: containerd --version
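
	After the daemon-reload and restart, the code waits up to 60s each for the containerd socket and for crictl to respond; the crictl version output above (RuntimeVersion 1.6.4, RuntimeApiVersion v1alpha2) is that probe succeeding. The same check by hand, using the paths from the log:

	  sudo systemctl restart containerd
	  stat /run/containerd/containerd.sock        # the socket must exist before crictl can talk
	  sudo crictl version
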
	I0601 11:07:05.741779  244383 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:07:05.743207  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:07:05.773719  244383 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:07:05.777293  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
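
	The one-liner above is an atomic /etc/hosts rewrite: drop any stale host.minikube.internal line, append the current gateway IP, write to a temp file, then cp over the original so the file is never left half-written. The generic shape of the pattern, with HOSTNAME and IP as placeholders:

	  { grep -v $'\tHOSTNAME$' /etc/hosts; printf '%s\tHOSTNAME\n' "$IP"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
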
	I0601 11:07:05.788623  244383 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:07:05.790049  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:07:05.790117  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.812809  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.812831  244383 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:07:05.812869  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.834860  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.834879  244383 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:07:05.834947  244383 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:07:05.857173  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:05.857192  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:05.857218  244383 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:07:05.857235  244383 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:07:05.857383  244383 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
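
	The rendered file above stacks four documents: kubeadm InitConfiguration and ClusterConfiguration plus kubelet and kube-proxy component configs, with the test's non-default API server port 8444 appearing in both bindPort and controlPlaneEndpoint. Assuming the kubeadm binary minikube stages on the node, a dry run is a cheap way to sanity-check such a file before it touches the host:

	  sudo /var/lib/minikube/binaries/v1.23.6/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run
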
	
	I0601 11:07:05.857471  244383 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
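
	The empty ExecStart= followed by a full ExecStart=... above is the usual systemd idiom for replacing a unit's command from a drop-in; the block is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. Applying any such drop-in by hand follows the same sequence:

	  sudo systemctl daemon-reload     # pick up the new drop-in
	  sudo systemctl restart kubelet
	  systemctl cat kubelet            # shows the unit plus every drop-in that overrides it
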
	I0601 11:07:05.857530  244383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:07:05.864412  244383 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:07:05.864485  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:07:05.870921  244383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:07:05.883133  244383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:07:05.896240  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:07:05.908996  244383 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:07:05.911816  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:07:05.920740  244383 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:07:05.920863  244383 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:07:05.920906  244383 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:07:05.920964  244383 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:07:05.920984  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt with IP's: []
	I0601 11:07:06.190511  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt ...
	I0601 11:07:06.190541  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt: {Name:mk1f0de9f338c1565864d345295f211cd6b42042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190751  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key ...
	I0601 11:07:06.190766  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key: {Name:mk3abd1ec1bc2a3303283efb1d56bffeb558d491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190855  244383 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:07:06.190870  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:07:06.411949  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 ...
	I0601 11:07:06.411982  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2: {Name:mk21c89d2fdd1fdc207dd136def37f5d90a62bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412202  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 ...
	I0601 11:07:06.412221  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2: {Name:mk2f4aae6eb49e6251c3e6c8e6f0f6462f382896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412314  244383 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt
	I0601 11:07:06.412369  244383 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key
	I0601 11:07:06.412451  244383 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:07:06.412469  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt with IP's: []
	I0601 11:07:06.545552  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt ...
	I0601 11:07:06.545619  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt: {Name:mkee564e3149cd8be755ca3cbe99f47feac8e4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.545807  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key ...
	I0601 11:07:06.545819  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key: {Name:mk3354416a46b334b24512eafd987800637af3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.547104  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:07:06.547148  244383 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:07:06.547174  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:07:06.547194  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:07:06.547234  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:07:06.547271  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:07:06.547327  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:06.547961  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:07:06.565921  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:07:06.584089  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:07:06.601191  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:07:06.618465  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:07:06.635815  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:07:06.653212  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:07:06.670886  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:07:06.687801  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:07:06.704953  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:07:06.721444  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:07:06.737875  244383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:07:06.751738  244383 ssh_runner.go:195] Run: openssl version
	I0601 11:07:06.756719  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:07:06.764146  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767163  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767216  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.771914  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:07:06.778934  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:07:06.786568  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789545  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789607  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.794248  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:07:06.801364  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:07:06.808247  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811196  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811252  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.816241  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
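
	The test -L / ln -fs steps above maintain OpenSSL's hashed certificate directory: each CA in /etc/ssl/certs is found via a symlink named after its subject-name hash (b5213941 for minikubeCA here) with a .0 suffix for the first certificate with that hash. A sketch of rebuilding such a link for one cert:

	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
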
	I0601 11:07:06.823684  244383 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:07:06.823768  244383 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:07:06.823809  244383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:07:06.847418  244383 cri.go:87] found id: ""
	I0601 11:07:06.847481  244383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:07:06.854612  244383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:07:06.861596  244383 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:07:06.861652  244383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:07:06.868516  244383 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:07:06.868568  244383 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:07:07.121183  244383 out.go:204]   - Generating certificates and keys ...
	I0601 11:07:09.218861  244383 out.go:204]   - Booting up control plane ...
	I0601 11:07:21.259795  244383 out.go:204]   - Configuring RBAC rules ...
	I0601 11:07:21.672636  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:21.672654  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:21.674533  244383 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:07:21.675845  244383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:07:21.679515  244383 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:07:21.679534  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:07:21.692464  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:07:22.465311  244383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:07:22.465382  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.465395  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_07_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.521244  244383 ops.go:34] apiserver oom_adj: -16
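
	The -16 logged above is the kube-apiserver's OOM score, read through /proc as a cheap liveness signal for the control plane before RBAC setup continues. The same probe from any shell on the node (assumes exactly one kube-apiserver process, as in this run):

	  cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16 here
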
	I0601 11:07:22.521263  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.109047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.609743  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.109036  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.609779  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:25.109823  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:25.609061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.108863  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.608780  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.109061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.609116  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.109699  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.609047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.109170  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.608851  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:30.109055  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:30.608852  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.109521  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.609057  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.108853  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.609531  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.108838  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.608822  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.108973  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.609839  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.671502  244383 kubeadm.go:1045] duration metric: took 12.206180961s to wait for elevateKubeSystemPrivileges.
	I0601 11:07:34.671537  244383 kubeadm.go:397] StartCluster complete in 27.847858486s
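
	The burst of identical `kubectl get sa default` calls above is a roughly 500ms poll that exits as soon as the default ServiceAccount appears, i.e. once the controller-manager's service-account machinery is live (12.2s in this run). The loop reduces to something like:

	  K=/var/lib/minikube/binaries/v1.23.6/kubectl
	  until sudo "$K" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5                                  # interval inferred from the timestamps above
	  done
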
	I0601 11:07:34.671557  244383 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:34.671645  244383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:07:34.673551  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:35.189278  244383 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:07:35.189337  244383 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:07:35.191451  244383 out.go:177] * Verifying Kubernetes components...
	I0601 11:07:35.189391  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:07:35.189390  244383 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 11:07:35.189576  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:35.192926  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:07:35.192990  244383 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193023  244383 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193071  244383 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193025  244383 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.193134  244383 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:07:35.193178  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.193498  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.193681  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.209430  244383 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:07:35.237918  244383 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:07:35.239410  244383 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.239425  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:07:35.239470  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.255735  244383 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.255765  244383 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:07:35.255799  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.256352  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.277557  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.290858  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:07:35.296059  244383 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.296086  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:07:35.296137  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.338006  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.376722  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.468185  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.653594  244383 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
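
	The sed pipeline a few lines up splices a hosts block (192.168.49.1 host.minikube.internal, then fallthrough) into the CoreDNS Corefile ahead of the forward plugin and replaces the configmap; the "host record injected" line above confirms it took. The result can be read back with:

	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
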
	I0601 11:07:35.783515  244383 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 11:07:35.784841  244383 addons.go:417] enableAddons completed in 595.455746ms
	I0601 11:07:37.216016  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:39.717025  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:42.216640  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:44.716894  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:47.216117  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:49.217067  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:51.716491  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:54.216277  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:56.216761  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:58.717105  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:01.216388  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:03.716389  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:05.717011  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:08.215942  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:10.216368  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:12.216490  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:14.716947  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:17.216379  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:19.216687  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:21.216835  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:23.717175  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:26.216167  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:28.216729  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:30.216872  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:32.716452  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:35.216938  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:37.716649  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:39.716753  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:42.215917  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:44.216056  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:46.216458  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:48.216662  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:50.716633  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:52.716937  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:55.216648  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:57.716740  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:00.217259  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:02.716121  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:04.716668  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:06.716874  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:08.717065  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:11.216427  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:13.716769  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:16.216572  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:18.715438  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:20.716744  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:23.216674  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:25.716243  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:27.716345  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:29.716770  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:32.217046  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:34.716539  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:36.716922  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:38.717062  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:40.717196  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:43.216722  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:45.716601  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:47.716677  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:49.718424  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:52.216702  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:54.716437  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:57.216473  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:59.216703  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:01.716563  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:04.216144  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:06.216284  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:08.716579  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:11.216102  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:13.216282  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:15.716437  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:18.216335  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:20.715993  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:22.716802  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:25.216481  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:27.216823  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:29.716428  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:31.716531  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:34.216325  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:36.216576  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:38.216755  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:40.716532  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:43.216563  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:45.716341  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:47.716680  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:50.216218  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:52.716283  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:54.716952  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:57.216293  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:59.216999  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:01.716144  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:03.716378  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:05.716604  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:08.216289  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:10.216683  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:12.716931  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:15.216225  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:17.216376  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:19.716558  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:22.216186  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:24.216522  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:26.717180  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:29.216092  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:31.216231  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:33.716223  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:35.218171  244383 node_ready.go:38] duration metric: took 4m0.008704673s waiting for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:11:35.220452  244383 out.go:177] 
	W0601 11:11:35.221885  244383 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:11:35.221913  244383 out.go:239] * 
	W0601 11:11:35.222650  244383 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:11:35.224616  244383 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:190: failed starting minikube (first start). args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220601110654-6708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
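Note: the wall of node_ready.go:58 lines above is minikube re-checking the node's Ready condition every ~2.5 seconds until the 6m wait expires; the node never left Ready=False, so the start exits with GUEST_START. As a minimal sketch of this kind of readiness poll (assuming a configured client-go clientset; the names waitNodeReady and cs are illustrative, not minikube's actual implementation):

	// waitNodeReady polls until the named node reports Ready, or the
	// timeout elapses. Transient API errors are swallowed so the loop
	// keeps retrying, mirroring the repeated log lines above.
	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(2500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient API errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

When a poll like this times out, the usual next step is to inspect the node's conditions directly (e.g. kubectl describe node) to see what is holding Ready at False; with the kindnet CNI configured here, an unready CNI is a common cause.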
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601110654-6708
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220601110654-6708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b",
	        "Created": "2022-06-01T11:07:03.290503902Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:07:03.630929291Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hosts",
	        "LogPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b-json.log",
	        "Name": "/default-k8s-different-port-20220601110654-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220601110654-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220601110654-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220601110654-6708",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220601110654-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220601110654-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7855192596bd9f60fe4ad2cd96f599cd40d7bd62bfad35d8e1f5a897e3270f06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7855192596bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220601110654-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dccf9935a74c",
	                        "default-k8s-different-port-20220601110654-6708"
	                    ],
	                    "NetworkID": "7d52ef0dc0855b59c05da2e66b25f4d0866ad1d653be1fa615e193dd86443771",
	                    "EndpointID": "333c0952bde2fd448463a8d5d563d8e8c8448f605be2cf7fffa411011fe20066",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
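Note: the inspect output above shows the failure is not at the Docker layer. The container is healthy (State.Status "running", OOMKilled false, ExitCode 0) and the custom apiserver port 8444 is published to 127.0.0.1:49414 as requested, so the timeout happened inside the guest while waiting for node readiness. To pull just the relevant fields out of output this verbose, docker inspect's --format flag is handy, for example:

	docker inspect --format '{{json .NetworkSettings.Ports}}' default-k8s-different-port-20220601110654-6708
	docker inspect --format '{{.State.Status}}' default-k8s-different-port-20220601110654-6708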
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220601110654-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p                                                | enable-default-cni-20220601104837-6708    | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:57 UTC |
	|         | enable-default-cni-20220601104837-6708            |                                           |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601104837-6708    | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	|         | enable-default-cni-20220601104837-6708            |                                           |         |                |                     |                     |
	| start   | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:58 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p bridge-20220601104837-6708                     | bridge-20220601104837-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:58 UTC |
	| start   | -p calico-20220601104839-6708                     | calico-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:57 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=calico --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p calico-20220601104839-6708                     | calico-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| start   | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 UTC | 01 Jun 22 10:59 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=cilium --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p cilium-20220601104839-6708                     | cilium-20220601104839-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 10:59 UTC |
	| start   | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:00 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:01 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| logs    | auto-20220601104837-6708 logs                     | auto-20220601104837-6708                  | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | -n 25                                             |                                           |         |                |                     |                     |
	| delete  | -p auto-20220601104837-6708                       | auto-20220601104837-6708                  | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	| logs    | old-k8s-version-20220601105850-6708               | old-k8s-version-20220601105850-6708       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601105939-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                    |                                           |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | disable-driver-mounts-20220601110654-6708         |                                           |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                   | embed-certs-20220601110327-6708           | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:06:54
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:06:54.667302  244383 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:06:54.667430  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667448  244383 out.go:309] Setting ErrFile to fd 2...
	I0601 11:06:54.667455  244383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:06:54.667611  244383 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:06:54.668037  244383 out.go:303] Setting JSON to false
	I0601 11:06:54.669846  244383 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2969,"bootTime":1654078646,"procs":645,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:06:54.669914  244383 start.go:125] virtualization: kvm guest
	I0601 11:06:54.672039  244383 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:06:54.673519  244383 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:06:54.673532  244383 notify.go:193] Checking for updates...
	I0601 11:06:54.676498  244383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:06:54.678066  244383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:06:54.679578  244383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:06:54.681049  244383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:06:54.682891  244383 config.go:178] Loaded profile config "calico-20220601104839-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683008  244383 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:06:54.683105  244383 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:06:54.683158  244383 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:06:54.724298  244383 docker.go:137] docker version: linux-20.10.16
	I0601 11:06:54.724374  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.826819  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.7540349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.826932  244383 docker.go:254] overlay module found
	I0601 11:06:54.829003  244383 out.go:177] * Using the docker driver based on user configuration
	I0601 11:06:54.830315  244383 start.go:284] selected driver: docker
	I0601 11:06:54.830327  244383 start.go:806] validating driver "docker" against <nil>
	I0601 11:06:54.830352  244383 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:06:54.831265  244383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:54.931062  244383 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:06:54.859997014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:54.931188  244383 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:06:54.931414  244383 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:06:54.933788  244383 out.go:177] * Using Docker driver with the root privilege
	I0601 11:06:54.935205  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:06:54.935218  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:06:54.935233  244383 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935238  244383 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:06:54.935243  244383 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 11:06:54.935250  244383 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:06:54.936846  244383 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:06:54.938038  244383 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:06:54.939519  244383 out.go:177] * Pulling base image ...
	I0601 11:06:54.940856  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:54.940881  244383 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:06:54.940905  244383 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:06:54.940928  244383 cache.go:57] Caching tarball of preloaded images
	I0601 11:06:54.941154  244383 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:06:54.941186  244383 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:06:54.941308  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:06:54.941333  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json: {Name:mk8b3d87cba3844f82b835b906c4fc7fcf103163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:06:54.986323  244383 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:06:54.986351  244383 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:06:54.986370  244383 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:06:54.986406  244383 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:06:54.986553  244383 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 123.17µs
	I0601 11:06:54.986588  244383 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:06:54.986696  244383 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:06:54.668423  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:57.168205  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:06:54.989283  244383 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:06:54.989495  244383 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:06:54.989523  244383 client.go:168] LocalClient.Create starting
	I0601 11:06:54.989576  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:06:54.989602  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989620  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.989670  244383 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:06:54.989686  244383 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:54.989697  244383 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:54.990003  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:06:55.021531  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:06:55.021592  244383 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601110654-6708] to gather additional debugging logs...
	I0601 11:06:55.021618  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708
	W0601 11:06:55.051948  244383 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601110654-6708 returned with exit code 1
	I0601 11:06:55.051984  244383 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601110654-6708]: docker network inspect default-k8s-different-port-20220601110654-6708: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.052003  244383 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601110654-6708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601110654-6708
	
	** /stderr **
	I0601 11:06:55.052049  244383 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:06:55.083654  244383 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001322e0] misses:0}
	I0601 11:06:55.083702  244383 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:06:55.083718  244383 network_create.go:115] attempt to create docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:06:55.083760  244383 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.150185  244383 network_create.go:99] docker network default-k8s-different-port-20220601110654-6708 192.168.49.0/24 created
	I0601 11:06:55.150232  244383 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20220601110654-6708" container
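The network-creation step above can be reproduced by hand. A minimal sketch, mirroring the flags of the logged docker network create invocation (name and subnet taken from this log):

	# Create the isolated bridge network minikube provisions for the node,
	# then confirm the subnet/gateway that containers on it will use.
	docker network create --driver=bridge \
	  --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  default-k8s-different-port-20220601110654-6708
	docker network inspect default-k8s-different-port-20220601110654-6708 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'

The static node IP 192.168.49.2 is simply the first client address in that /24, as the subnet reservation above shows.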
	I0601 11:06:55.150301  244383 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:06:55.185029  244383 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220601110654-6708 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:06:55.218896  244383 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.218982  244383 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220601110654-6708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --entrypoint /usr/bin/test -v default-k8s-different-port-20220601110654-6708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:06:55.773802  244383 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220601110654-6708
	I0601 11:06:55.773849  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:06:55.773871  244383 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:06:55.773932  244383 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 11:06:59.334049  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:01.667968  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:03.152484  244383 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220601110654-6708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (7.378487132s)
	I0601 11:07:03.152523  244383 kic.go:188] duration metric: took 7.378645 seconds to extract preloaded images to volume
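The 7.4-second step above is the preload extraction: a throwaway container mounts the lz4 tarball read-only plus the named volume, and tar runs inside the container so the host needs no lz4 binary. A sketch of the same pattern, where PRELOAD stands for the host tarball path shown in the log:

	# Extract an lz4-compressed image preload into a named Docker volume.
	PRELOAD=/path/to/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" \
	  -v default-k8s-different-port-20220601110654-6708:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a \
	  -I lz4 -xf /preloaded.tar -C /extractDir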
	W0601 11:07:03.152655  244383 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0601 11:07:03.152754  244383 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:07:03.258344  244383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220601110654-6708 --name default-k8s-different-port-20220601110654-6708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220601110654-6708 --network default-k8s-different-port-20220601110654-6708 --ip 192.168.49.2 --volume default-k8s-different-port-20220601110654-6708:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
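Note the --publish=127.0.0.1::PORT flags above: each container port (22 for ssh, 2376 for docker, 8444 for the apiserver, and so on) is bound to an ephemeral loopback port on the host. The assigned ports (49417 for ssh later in this log) can be recovered with docker port:

	# Look up the ephemeral host ports Docker assigned to the node.
	docker port default-k8s-different-port-20220601110654-6708 22/tcp
	docker port default-k8s-different-port-20220601110654-6708 8444/tcp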
	I0601 11:07:03.640637  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Running}}
	I0601 11:07:03.675247  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.707758  244383 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220601110654-6708 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:07:03.767985  244383 oci.go:247] the created container "default-k8s-different-port-20220601110654-6708" has a running status.
	I0601 11:07:03.768013  244383 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa...
	I0601 11:07:03.823786  244383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:07:03.917787  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:03.956706  244383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:07:03.956735  244383 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220601110654-6708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:07:04.044516  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:04.081442  244383 machine.go:88] provisioning docker machine ...
	I0601 11:07:04.081477  244383 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:04.081535  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.119200  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.119405  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.119425  244383 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:07:04.249668  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:07:04.249734  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.283443  244383 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:04.283593  244383 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0601 11:07:04.283628  244383 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:07:04.395587  244383 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:07:04.395617  244383 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:07:04.395643  244383 ubuntu.go:177] setting up certificates
	I0601 11:07:04.395652  244383 provision.go:83] configureAuth start
	I0601 11:07:04.395697  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.427413  244383 provision.go:138] copyHostCerts
	I0601 11:07:04.427469  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:07:04.427481  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:07:04.427543  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:07:04.427622  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:07:04.427632  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:07:04.427659  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:07:04.427708  244383 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:07:04.427721  244383 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:07:04.427753  244383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:07:04.427802  244383 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
	I0601 11:07:04.535631  244383 provision.go:172] copyRemoteCerts
	I0601 11:07:04.535685  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:07:04.535726  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.568780  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.659152  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:07:04.676610  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:07:04.694731  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:07:04.711549  244383 provision.go:86] duration metric: configureAuth took 315.887909ms
	I0601 11:07:04.711573  244383 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:07:04.711735  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:04.711748  244383 machine.go:91] provisioned docker machine in 630.288068ms
	I0601 11:07:04.711754  244383 client.go:171] LocalClient.Create took 9.722222745s
	I0601 11:07:04.711778  244383 start.go:173] duration metric: libmachine.API.Create for "default-k8s-different-port-20220601110654-6708" took 9.722275215s
	I0601 11:07:04.711793  244383 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:07:04.711800  244383 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:07:04.711844  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:07:04.711903  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.745536  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:04.831037  244383 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:07:04.833655  244383 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:07:04.833679  244383 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:07:04.833703  244383 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:07:04.833716  244383 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:07:04.833726  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:07:04.833775  244383 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:07:04.833870  244383 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:07:04.833975  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:07:04.840420  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:04.857187  244383 start.go:309] post-start completed in 145.384397ms
	I0601 11:07:04.857493  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.888747  244383 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:07:04.888963  244383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:07:04.889000  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:04.919352  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.000243  244383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
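The two df probes above read the usage of /var inside the node: column 5 of df -h is the use percentage, column 4 of df -BG the available space in GiB. Equivalent one-liners, assuming the same field layout:

	df -h /var | awk 'NR==2{print $5}'    # e.g. "12%"  (percent used)
	df -BG /var | awk 'NR==2{print $4}'   # e.g. "55G"  (GiB available)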
	I0601 11:07:05.004050  244383 start.go:134] duration metric: createHost completed in 10.017341223s
	I0601 11:07:05.004075  244383 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 10.017502791s
	I0601 11:07:05.004171  244383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035905  244383 ssh_runner.go:195] Run: systemctl --version
	I0601 11:07:05.035960  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.035972  244383 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:07:05.036031  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:05.069327  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.070632  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:05.175990  244383 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:07:05.186279  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:07:05.194913  244383 docker.go:187] disabling docker service ...
	I0601 11:07:05.194953  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:07:05.211132  244383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:07:05.219763  244383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:07:05.302855  244383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:07:05.379942  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:07:05.388684  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:07:05.401125  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.408798  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.416626  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:07:05.424218  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:07:05.431786  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:07:05.439234  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
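The sed runs above patch /etc/containerd/config.toml in place, and the base64 payload decodes to a one-line drop-in. A sketch of what lands on disk (payload decoded from this log):

	echo dmVyc2lvbiA9IDIK | base64 -d    # prints: version = 2
	# After the sed edits, config.toml carries (among others):
	#   sandbox_image = "k8s.gcr.io/pause:3.6"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.mk"
	sudo systemctl daemon-reload && sudo systemctl restart containerd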
	I0601 11:07:05.451481  244383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:07:05.457796  244383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:07:05.464201  244383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:07:05.540478  244383 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:07:05.650499  244383 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:07:05.650567  244383 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:07:05.654052  244383 start.go:468] Will wait 60s for crictl version
	I0601 11:07:05.654103  244383 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:07:05.681128  244383 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:07:05.681188  244383 ssh_runner.go:195] Run: containerd --version
	I0601 11:07:05.710828  244383 ssh_runner.go:195] Run: containerd --version
	I0601 11:07:05.741779  244383 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:07:05.743207  244383 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:07:05.773719  244383 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:07:05.777293  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
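The /etc/hosts update above uses a filter-and-append pattern so repeated runs stay idempotent. A generalized sketch (ip and name are parameters here; the log pins host.minikube.internal to the network gateway):

	# Idempotently pin $name to $ip in /etc/hosts: drop any stale line,
	# append the fresh mapping, then copy the result back with sudo.
	ip=192.168.49.1; name=host.minikube.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts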
	I0601 11:07:05.788623  244383 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:07:05.790049  244383 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:07:05.790117  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.812809  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.812831  244383 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:07:05.812869  244383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:05.834860  244383 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:05.834879  244383 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:07:05.834947  244383 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:07:05.857173  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:05.857192  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:05.857218  244383 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:07:05.857235  244383 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:07:05.857383  244383 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
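Before kubeadm runs, the rendered config (written to /var/tmp/minikube/kubeadm.yaml below) can be sanity-checked from inside the node. A sketch, assuming the node's bundled kubeadm binary:

	# Compare the generated config against kubeadm's own defaults, then
	# dry-run only the preflight phase against it.
	/var/lib/minikube/binaries/v1.23.6/kubeadm config print init-defaults \
	  --component-configs KubeletConfiguration,KubeProxyConfiguration
	sudo /var/lib/minikube/binaries/v1.23.6/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml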
	
	I0601 11:07:05.857471  244383 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 11:07:05.857530  244383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:07:05.864412  244383 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:07:05.864485  244383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:07:05.870921  244383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:07:05.883133  244383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:07:05.896240  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
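The unit fragment and drop-in above are plain systemd files; once copied into place, activating the new kubelet command line is just a reload plus restart. A sketch:

	# 10-kubeadm.conf overrides ExecStart (the empty ExecStart= line first
	# clears the packaged command); verify the merge, then restart.
	systemctl cat kubelet
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet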
	I0601 11:07:05.908996  244383 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:07:05.911816  244383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:07:05.920740  244383 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:07:05.920863  244383 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:07:05.920906  244383 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:07:05.920964  244383 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:07:05.920984  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt with IP's: []
	I0601 11:07:06.190511  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt ...
	I0601 11:07:06.190541  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.crt: {Name:mk1f0de9f338c1565864d345295f211cd6b42042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190751  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key ...
	I0601 11:07:06.190766  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key: {Name:mk3abd1ec1bc2a3303283efb1d56bffeb558d491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.190855  244383 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:07:06.190870  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:07:06.411949  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 ...
	I0601 11:07:06.411982  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2: {Name:mk21c89d2fdd1fdc207dd136def37f5d90a62bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412202  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 ...
	I0601 11:07:06.412221  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2: {Name:mk2f4aae6eb49e6251c3e6c8e6f0f6462f382896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.412314  244383 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt
	I0601 11:07:06.412369  244383 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key
	I0601 11:07:06.412451  244383 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:07:06.412469  244383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt with IP's: []
	I0601 11:07:06.545552  244383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt ...
	I0601 11:07:06.545619  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt: {Name:mkee564e3149cd8be755ca3cbe99f47feac8e4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.545807  244383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key ...
	I0601 11:07:06.545819  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key: {Name:mk3354416a46b334b24512eafd987800637af3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:06.547104  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:07:06.547148  244383 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:07:06.547174  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:07:06.547194  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:07:06.547234  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:07:06.547271  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:07:06.547327  244383 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:07:06.547961  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:07:06.565921  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:07:06.584089  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:07:06.601191  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:07:06.618465  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:07:06.635815  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:07:06.653212  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:07:06.670886  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:07:06.687801  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:07:06.704953  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:07:06.721444  244383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:07:06.737875  244383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:07:06.751738  244383 ssh_runner.go:195] Run: openssl version
	I0601 11:07:06.756719  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:07:06.764146  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767163  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.767216  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:07:06.771914  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:07:06.778934  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:07:06.786568  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789545  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.789607  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:07:06.794248  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:07:06.801364  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:07:06.808247  244383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811196  244383 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.811252  244383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:07:06.816241  244383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
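The openssl/ln sequence above registers each CA in OpenSSL's hash-lookup directory: /etc/ssl/certs resolves CAs via subject-hash symlinks of the form <hash>.0 (b5213941 above is exactly that hash for minikubeCA.pem). The pattern for one certificate, with CERT as a placeholder path:

	# Compute the subject hash, then link it into the lookup directory.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$CERT")
	sudo test -L "/etc/ssl/certs/$h.0" || sudo ln -fs "$CERT" "/etc/ssl/certs/$h.0"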
	I0601 11:07:06.823684  244383 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:07:06.823768  244383 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:07:06.823809  244383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:07:06.847418  244383 cri.go:87] found id: ""
	I0601 11:07:06.847481  244383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:07:06.854612  244383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:07:06.861596  244383 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:07:06.861652  244383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:07:06.868516  244383 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:07:06.868568  244383 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
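Note: the kubeadm.yaml referenced above is generated by minikube and staged at /var/tmp/minikube/kubeadm.yaml (copied at 11:07:06.854). A minimal sketch for inspecting it on a live run, assuming the kic container is still up and named after the profile:

    # Sketch (hypothetical follow-up): dump the kubeadm config minikube fed to `kubeadm init`
    docker exec default-k8s-different-port-20220601110654-6708 sudo cat /var/tmp/minikube/kubeadm.yaml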
	I0601 11:07:03.668636  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:06.167338  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:07.121183  244383 out.go:204]   - Generating certificates and keys ...
	I0601 11:07:09.218861  244383 out.go:204]   - Booting up control plane ...
	I0601 11:07:08.167714  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:10.168162  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:12.667278  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:14.668246  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:17.168197  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.259795  244383 out.go:204]   - Configuring RBAC rules ...
	I0601 11:07:21.672636  244383 cni.go:95] Creating CNI manager for ""
	I0601 11:07:21.672654  244383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:07:21.674533  244383 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:07:19.668390  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.668490  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:21.675845  244383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:07:21.679515  244383 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:07:21.679534  244383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:07:21.692464  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
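The docker driver plus containerd runtime makes minikube pick kindnet (cni.go:162) and apply the 2429-byte manifest it scp'd to /var/tmp/minikube/cni.yaml. A sketch for confirming the resulting DaemonSet rolled out; the `app=kindnet` label selector is an assumption about that manifest:

    # Sketch: check the kindnet DaemonSet created by the CNI manifest
    kubectl --context default-k8s-different-port-20220601110654-6708 \
      -n kube-system get daemonset,pods -l app=kindnet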
	I0601 11:07:22.465311  244383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:07:22.465382  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.465395  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_07_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:22.521244  244383 ops.go:34] apiserver oom_adj: -16
	I0601 11:07:22.521263  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.109047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:23.609743  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.109036  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.609779  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:24.167646  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:26.168090  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:25.109823  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:25.609061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.108863  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:26.608780  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.109061  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:27.609116  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.109699  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.609047  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.109170  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:29.608851  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:28.667871  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:30.668198  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:30.109055  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:30.608852  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.109521  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:31.609057  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.108853  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:32.609531  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.108838  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:33.608822  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.108973  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.609839  244383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:07:34.671502  244383 kubeadm.go:1045] duration metric: took 12.206180961s to wait for elevateKubeSystemPrivileges.
	I0601 11:07:34.671537  244383 kubeadm.go:397] StartCluster complete in 27.847858486s
	I0601 11:07:34.671557  244383 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:34.671645  244383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:07:34.673551  244383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:07:35.189278  244383 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:07:35.189337  244383 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:07:35.191451  244383 out.go:177] * Verifying Kubernetes components...
	I0601 11:07:35.189391  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:07:35.189390  244383 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 11:07:35.189576  244383 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:35.192926  244383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:07:35.192990  244383 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193023  244383 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193071  244383 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:07:35.193025  244383 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.193134  244383 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:07:35.193178  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.193498  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.193681  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.209430  244383 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
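The node_ready wait that starts here polls the node's Ready condition for up to 6m. A rough manual equivalent, assuming the kubectl context carries the profile name:

    # Sketch: equivalent of the 6m node-readiness wait
    kubectl --context default-k8s-different-port-20220601110654-6708 \
      wait --for=condition=Ready node/default-k8s-different-port-20220601110654-6708 --timeout=6m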
	I0601 11:07:35.237918  244383 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:07:35.239410  244383 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.239425  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:07:35.239470  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.255735  244383 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:07:35.255765  244383 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:07:35.255799  244383 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:07:35.256352  244383 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:07:35.277557  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.290858  244383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:07:35.296059  244383 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.296086  244383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:07:35.296137  244383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:07:35.338006  244383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:07:35.376722  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:07:35.468185  244383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:07:35.653594  244383 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
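The sed pipeline at 11:07:35.290 rewrites the coredns ConfigMap so the Corefile gains a hosts block ahead of its forward plugin; reconstructed from that command, the injected stanza is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }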
	I0601 11:07:35.783515  244383 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 11:07:33.167882  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:35.168161  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:37.168296  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:35.784841  244383 addons.go:417] enableAddons completed in 595.455746ms
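Both addons finish enabling in roughly 595ms. A sketch for cross-checking the result against this profile:

    # Sketch: list addon state for this profile
    out/minikube-linux-amd64 addons list -p default-k8s-different-port-20220601110654-6708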
	I0601 11:07:37.216016  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:39.667840  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:42.167654  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:39.717025  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:42.216640  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:44.667876  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:47.168006  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:44.716894  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:47.216117  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:49.217067  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:49.168183  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:51.667932  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:51.716491  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:54.216277  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:54.167913  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:56.167953  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:07:56.216761  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:58.717105  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:07:58.168275  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:00.668037  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:01.216388  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:03.716389  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:03.167969  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:05.667837  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.168013  232046 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:08:08.670567  232046 node_ready.go:38] duration metric: took 4m0.010022239s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:08:08.673338  232046 out.go:177] 
	W0601 11:08:08.675576  232046 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:08:08.675599  232046 out.go:239] * 
	W0601 11:08:08.676630  232046 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:08:08.678476  232046 out.go:177] 
	I0601 11:08:05.717011  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:08.215942  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:10.216368  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:12.216490  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:14.716947  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:17.216379  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:19.216687  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:21.216835  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:23.717175  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:26.216167  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:28.216729  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:30.216872  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:32.716452  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:35.216938  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:37.716649  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:39.716753  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:42.215917  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:44.216056  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:46.216458  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:48.216662  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:50.716633  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:52.716937  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:55.216648  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:08:57.716740  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:00.217259  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:02.716121  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:04.716668  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:06.716874  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:08.717065  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:11.216427  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:13.716769  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:16.216572  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:18.715438  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:20.716744  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:23.216674  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:25.716243  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:27.716345  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:29.716770  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:32.217046  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:34.716539  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:36.716922  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:38.717062  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:40.717196  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:43.216722  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:45.716601  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:47.716677  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:49.718424  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:52.216702  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:54.716437  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:57.216473  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:09:59.216703  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:01.716563  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:04.216144  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:06.216284  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:08.716579  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:11.216102  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:13.216282  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:15.716437  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:18.216335  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:20.715993  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:22.716802  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:25.216481  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:27.216823  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:29.716428  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:31.716531  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:34.216325  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:36.216576  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:38.216755  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:40.716532  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:43.216563  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:45.716341  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:47.716680  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:50.216218  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:52.716283  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:54.716952  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:57.216293  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:10:59.216999  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:01.716144  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:03.716378  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:05.716604  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:08.216289  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:10.216683  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:12.716931  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:15.216225  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:17.216376  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:19.716558  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:22.216186  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:24.216522  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:26.717180  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:29.216092  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:31.216231  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:33.716223  244383 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:11:35.218171  244383 node_ready.go:38] duration metric: took 4m0.008704673s waiting for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:11:35.220452  244383 out.go:177] 
	W0601 11:11:35.221885  244383 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:11:35.221913  244383 out.go:239] * 
	W0601 11:11:35.222650  244383 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:11:35.224616  244383 out.go:177] 
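As with the embed-certs profile above, the failure is a GUEST_START timeout: the node never left NotReady inside the 6m wait. The dumps that follow (container status, containerd log, describe nodes) point at the CNI pod crash-looping. A sketch for a quick first look at what is stuck:

    # Sketch: survey pod state across namespaces for this profile's context
    kubectl --context default-k8s-different-port-20220601110654-6708 get pods -A -o wide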
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	783ef41102cfd       6de166512aa22       24 seconds ago      Exited              kindnet-cni               5                   65b8c60551ae4
	313035e9674ff       4c03754524064       4 minutes ago       Running             kube-proxy                0                   c6ff76a6b51bf
	f9746f111b56a       8fa62c12256df       4 minutes ago       Running             kube-apiserver            0                   9e938dc1f669a
	0b15aeee4f551       595f327f224a4       4 minutes ago       Running             kube-scheduler            0                   1fa00271568ab
	627fd5c08820c       df7b72818ad2e       4 minutes ago       Running             kube-controller-manager   0                   a871ea5dc3032
	6ce85ae821e03       25f8c7f3da61c       4 minutes ago       Running             etcd                      0                   73e15160f8342
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:07:03 UTC, end at Wed 2022-06-01 11:11:36 UTC. --
	Jun 01 11:08:51 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:08:51.888368807Z" level=warning msg="cleaning up after shim disconnected" id=0b7a11299d21e3a9a27edebc3e56fbcc84465b111a7fc67abd0195a4b9ee3e52 namespace=k8s.io
	Jun 01 11:08:51 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:08:51.888381332Z" level=info msg="cleaning up dead shim"
	Jun 01 11:08:51 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:08:51.897667645Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:08:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2291 runtime=io.containerd.runc.v2\n"
	Jun 01 11:08:52 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:08:52.827528393Z" level=info msg="RemoveContainer for \"aabc556904211cf00eabdce09beaaae9685e53a40d354da210e696f6865da0e4\""
	Jun 01 11:08:52 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:08:52.831648203Z" level=info msg="RemoveContainer for \"aabc556904211cf00eabdce09beaaae9685e53a40d354da210e696f6865da0e4\" returns successfully"
	Jun 01 11:09:33 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:33.586949493Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Jun 01 11:09:33 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:33.598962580Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9\""
	Jun 01 11:09:33 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:33.599422779Z" level=info msg="StartContainer for \"8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9\""
	Jun 01 11:09:33 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:33.674233730Z" level=info msg="StartContainer for \"8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9\" returns successfully"
	Jun 01 11:09:43 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:43.893280453Z" level=info msg="shim disconnected" id=8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9
	Jun 01 11:09:43 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:43.893342885Z" level=warning msg="cleaning up after shim disconnected" id=8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9 namespace=k8s.io
	Jun 01 11:09:43 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:43.893360770Z" level=info msg="cleaning up dead shim"
	Jun 01 11:09:43 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:43.902580622Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:09:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2369 runtime=io.containerd.runc.v2\n"
	Jun 01 11:09:43 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:43.916716073Z" level=info msg="RemoveContainer for \"0b7a11299d21e3a9a27edebc3e56fbcc84465b111a7fc67abd0195a4b9ee3e52\""
	Jun 01 11:09:43 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:09:43.920391753Z" level=info msg="RemoveContainer for \"0b7a11299d21e3a9a27edebc3e56fbcc84465b111a7fc67abd0195a4b9ee3e52\" returns successfully"
	Jun 01 11:11:11 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:11.586633563Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Jun 01 11:11:11 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:11.599167773Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2\""
	Jun 01 11:11:11 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:11.599743627Z" level=info msg="StartContainer for \"783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2\""
	Jun 01 11:11:11 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:11.757609819Z" level=info msg="StartContainer for \"783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2\" returns successfully"
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.984032371Z" level=info msg="shim disconnected" id=783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.984096913Z" level=warning msg="cleaning up after shim disconnected" id=783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2 namespace=k8s.io
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.984110739Z" level=info msg="cleaning up dead shim"
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.993468203Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:11:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2449 runtime=io.containerd.runc.v2\n"
	Jun 01 11:11:22 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:22.095149153Z" level=info msg="RemoveContainer for \"8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9\""
	Jun 01 11:11:22 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:22.099371016Z" level=info msg="RemoveContainer for \"8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9\" returns successfully"
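The containerd log shows kindnet-cni being recreated repeatedly (Attempt:4 at 11:09:33, Attempt:5 at 11:11:11), with each shim disconnecting within seconds, i.e. a crash loop. A sketch for pulling the container's own output on the node, using the ID from the status table above:

    # Sketch: read the crash-looping kindnet-cni container's logs (run on the node)
    sudo crictl ps -a --name kindnet-cni
    sudo crictl logs 783ef41102cfd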
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220601110654-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220601110654-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_07_22_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:07:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220601110654-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:11:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:07:33 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:07:33 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:07:33 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:07:33 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220601110654-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                c3073178-0849-48bb-88da-ba72ab8c4ba0
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220601110654-6708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m10s
	  kube-system                 kindnet-7fspq                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220601110654-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220601110654-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-slzcl                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220601110654-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m1s   kube-proxy  
	  Normal  Starting                 4m10s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s  kubelet     Updated Node Allocatable limit across pods
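Every condition is healthy except Ready, which is False with "container runtime network not ready ... cni plugin not initialized"; that matches the kindnet crash loop and explains the node.kubernetes.io/not-ready taint. A sketch for extracting that message directly:

    # Sketch: print the Ready condition's message for this node
    kubectl --context default-k8s-different-port-20220601110654-6708 \
      get node default-k8s-different-port-20220601110654-6708 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'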
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44] <==
	* {"level":"info","ts":"2022-06-01T11:07:15.485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T11:07:15.485Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:07:15.488Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:07:15.488Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:07:15.488Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:07:15.488Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:07:15.488Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:07:16.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220601110654-6708 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:07:16.177Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:07:16.178Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:07:16.178Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:07:16.177Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  11:11:36 up 54 min,  0 users,  load average: 0.59, 1.18, 1.61
	Linux default-k8s-different-port-20220601110654-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90] <==
	* I0601 11:07:18.453282       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:07:18.453289       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:07:18.453299       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:07:18.453427       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:07:18.454162       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:07:18.464230       1 controller.go:611] quota admission added evaluator for: namespaces
	I0601 11:07:19.313010       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:07:19.313033       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:07:19.318632       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 11:07:19.321753       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 11:07:19.321788       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:07:19.672421       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:07:19.701304       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:07:19.786756       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:07:19.792151       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 11:07:19.793209       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:07:19.796644       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:07:20.164772       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:07:20.480504       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:07:21.468664       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:07:21.475420       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:07:21.484951       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:07:33.885430       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:07:34.285929       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:07:34.903429       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787] <==
	* I0601 11:07:33.334022       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 11:07:33.334139       1 event.go:294] "Event occurred" object="default-k8s-different-port-20220601110654-6708" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node default-k8s-different-port-20220601110654-6708 event: Registered Node default-k8s-different-port-20220601110654-6708 in Controller"
	I0601 11:07:33.340437       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0601 11:07:33.340465       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0601 11:07:33.342708       1 shared_informer.go:247] Caches are synced for namespace 
	I0601 11:07:33.370084       1 shared_informer.go:247] Caches are synced for service account 
	I0601 11:07:33.410234       1 shared_informer.go:247] Caches are synced for expand 
	I0601 11:07:33.416497       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:07:33.464111       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:07:33.474301       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0601 11:07:33.482924       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:07:33.484099       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:07:33.522810       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:07:33.526980       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:33.535611       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:33.887240       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:07:33.937891       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:33.937920       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:07:33.958070       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:34.291990       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7fspq"
	I0601 11:07:34.293024       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-slzcl"
	I0601 11:07:34.337886       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-zbtdx"
	I0601 11:07:34.342039       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9gcj2"
	I0601 11:07:34.693996       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:07:34.702363       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-zbtdx"
	
	* 
	* ==> kube-proxy [313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d] <==
	* I0601 11:07:34.878114       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:07:34.878163       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:07:34.878197       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:07:34.900526       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:07:34.900564       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:07:34.900573       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:07:34.900595       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:07:34.900961       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:07:34.901514       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:07:34.901535       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:07:34.901567       1 config.go:317] "Starting service config controller"
	I0601 11:07:34.901573       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:07:35.002527       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:07:35.002535       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e] <==
	* W0601 11:07:18.472752       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:07:18.472806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:07:18.472922       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:07:18.473037       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.473083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.473043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:07:18.472942       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:07:18.473159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:07:18.473644       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.473712       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:18.473647       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:07:18.473764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:07:18.475610       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.475814       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:19.293620       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:07:19.293655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:07:19.295513       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:19.295539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:19.320706       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:07:19.320741       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:07:19.376036       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:07:19.376074       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:07:19.399236       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:07:19.399272       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:07:22.265287       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:07:03 UTC, end at Wed 2022-06-01 11:11:36 UTC. --
	Jun 01 11:10:31 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:10:31.844413    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:10:35 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:10:35.584770    1317 scope.go:110] "RemoveContainer" containerID="8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9"
	Jun 01 11:10:35 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:10:35.585047    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:10:36 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:10:36.845143    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:10:41 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:10:41.846601    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:10:46 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:10:46.584685    1317 scope.go:110] "RemoveContainer" containerID="8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9"
	Jun 01 11:10:46 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:10:46.584996    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:10:46 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:10:46.847907    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:10:51 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:10:51.849166    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:10:56 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:10:56.850696    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:11:00 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:11:00.584757    1317 scope.go:110] "RemoveContainer" containerID="8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9"
	Jun 01 11:11:00 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:00.585030    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:11:01 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:01.852425    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:11:06 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:06.853738    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:11:11 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:11:11.584310    1317 scope.go:110] "RemoveContainer" containerID="8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9"
	Jun 01 11:11:11 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:11.854783    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:11:16 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:16.855858    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:21.857225    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:11:22 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:11:22.093985    1317 scope.go:110] "RemoveContainer" containerID="8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9"
	Jun 01 11:11:22 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:11:22.094323    1317 scope.go:110] "RemoveContainer" containerID="783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2"
	Jun 01 11:11:22 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:22.094623    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:11:26 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:26.858028    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:11:31 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:31.858896    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:11:33 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:11:33.584173    1317 scope.go:110] "RemoveContainer" containerID="783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2"
	Jun 01 11:11:33 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:11:33.584520    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-9gcj2 storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 describe pod coredns-64897985d-9gcj2 storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod coredns-64897985d-9gcj2 storage-provisioner: exit status 1 (51.11268ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-9gcj2" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod coredns-64897985d-9gcj2 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (282.63s)
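Note: the FirstStart failure above follows directly from the kubelet log: kindnet-cni is in CrashLoopBackOff, so the CNI never initializes ("Container runtime network not ready ... cni plugin not initialized") and the node never reports Ready, which is why coredns-64897985d-9gcj2 and storage-provisioner never leave Pending. Below is a minimal sketch of the kind of readiness poll the harness performs while waiting; this is a hypothetical helper, not the actual minikube test code, and it assumes kubectl is on PATH (the context name is taken from the log above).

// readiness_poll.go - hypothetical sketch of a node-readiness wait loop,
// similar in spirit to what the integration harness does. Not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady shells out to kubectl; the jsonpath prints one "True"/"False"
// per node for the Ready condition.
func nodeReady(ctx string) bool {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "nodes",
		"-o", `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.Contains(string(out), "True")
}

func main() {
	// Context name from the failed test above.
	ctx := "default-k8s-different-port-20220601110654-6708"
	deadline := time.Now().Add(5 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		if nodeReady(ctx) {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out: node never became Ready (CNI not initialized?)")
}

If the node never turns Ready within the deadline, the cluster is left in exactly the state captured by the post-mortem output above.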

TestStartStop/group/embed-certs/serial/DeployApp (484.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [83453f2f-82f9-4123-940c-6ec0b1d16d40] Pending
helpers_test.go:342: "busybox" [83453f2f-82f9-4123-940c-6ec0b1d16d40] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0601 11:08:22.337911    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:08:34.904352    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:09:02.589065    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:09:22.034611    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:09:31.087043    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 11:09:49.719634    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:10:40.379997    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:40.385219    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:40.395446    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:40.415692    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:40.455938    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:40.536218    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:40.696586    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:41.017422    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:41.657873    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:42.939051    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:45.499702    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:10:50.619930    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:11:00.860406    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:11:21.341003    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:11:21.551343    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: ***** TestStartStop/group/embed-certs/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:198: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
start_stop_delete_test.go:198: TestStartStop/group/embed-certs/serial/DeployApp: showing logs for failed pods as of 2022-06-01 11:16:11.124154212 +0000 UTC m=+3372.123774455
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 describe po busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context embed-certs-20220601110327-6708 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgcrb (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-wgcrb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  45s (x8 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 logs busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context embed-certs-20220601110327-6708 logs busybox -n default:
start_stop_delete_test.go:198: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
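Note: the describe output above shows why DeployApp times out. The single node still carries the node.kubernetes.io/not-ready taint (again because kindnet-cni never started), and the pod's default tolerations only cover the NoExecute variants, not the taint the scheduler is reporting. A toy illustration of that mismatch follows; the matching logic is deliberately simplified (it is not the real scheduler code), and the NoSchedule effect is an assumption, since the event message elides the effect.

// taint_match.go - hypothetical illustration of why busybox stays Pending.
// Not the Kubernetes scheduler; operators and wildcard effects are omitted.
package main

import "fmt"

type Taint struct{ Key, Effect string }
type Toleration struct{ Key, Effect string }

// tolerates reports whether any toleration matches the taint's key and effect.
func tolerates(tols []Toleration, t Taint) bool {
	for _, tol := range tols {
		if tol.Key == t.Key && tol.Effect == t.Effect {
			return true
		}
	}
	return false
}

func main() {
	// Taint from the FailedScheduling event above; effect assumed NoSchedule.
	taint := Taint{Key: "node.kubernetes.io/not-ready", Effect: "NoSchedule"}
	// Default tolerations from the pod description above: NoExecute only.
	tols := []Toleration{
		{Key: "node.kubernetes.io/not-ready", Effect: "NoExecute"},
		{Key: "node.kubernetes.io/unreachable", Effect: "NoExecute"},
	}
	fmt.Println("pod tolerates taint:", tolerates(tols, taint)) // false -> Unschedulable
}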
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601110327-6708
helpers_test.go:235: (dbg) docker inspect embed-certs-20220601110327-6708:

-- stdout --
	[
	    {
	        "Id": "b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d",
	        "Created": "2022-06-01T11:03:36.104826313Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:03:36.476018297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hosts",
	        "LogPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d-json.log",
	        "Name": "/embed-certs-20220601110327-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220601110327-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220601110327-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b96100ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/docker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa924f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220601110327-6708",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220601110327-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220601110327-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e07617b2a6be7f1d7fcd4f72c38164dc41010e13179d5f3d71f30078705fa21",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6e07617b2a6b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220601110327-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b77a5d5e61bf",
	                        "embed-certs-20220601110327-6708"
	                    ],
	                    "NetworkID": "85c31b5e416e869b4ae1612c11e4fd39718a187a5009c211794c61313cb0c682",
	                    "EndpointID": "8df55589072b1e0d65a42a89f9b0e4d5153d5de972481a98d522d287ef34389c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220601110327-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p                                                         | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220601110654-6708      | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | disable-driver-mounts-20220601110654-6708                  |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | calico-20220601104839-6708                                 | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220601104839-6708                              | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:15:23
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:15:23.741784  263941 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:15:23.741991  263941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:15:23.742003  263941 out.go:309] Setting ErrFile to fd 2...
	I0601 11:15:23.742008  263941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:15:23.742123  263941 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:15:23.742399  263941 out.go:303] Setting JSON to false
	I0601 11:15:23.744026  263941 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3478,"bootTime":1654078646,"procs":610,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:15:23.744098  263941 start.go:125] virtualization: kvm guest
	I0601 11:15:23.746332  263941 out.go:177] * [newest-cni-20220601111420-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:15:23.747824  263941 notify.go:193] Checking for updates...
	I0601 11:15:23.749331  263941 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:15:23.750766  263941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:15:23.752196  263941 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:15:23.753477  263941 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:15:23.754830  263941 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:15:23.756669  263941 config.go:178] Loaded profile config "newest-cni-20220601111420-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:15:23.757075  263941 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:15:23.796054  263941 docker.go:137] docker version: linux-20.10.16
	I0601 11:15:23.796147  263941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:15:23.900733  263941 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:15:23.826843836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:15:23.900836  263941 docker.go:254] overlay module found
	I0601 11:15:23.902962  263941 out.go:177] * Using the docker driver based on existing profile
	I0601 11:15:23.904197  263941 start.go:284] selected driver: docker
	I0601 11:15:23.904212  263941 start.go:806] validating driver "docker" against &{Name:newest-cni-20220601111420-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601111420-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:15:23.904305  263941 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:15:23.905170  263941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:15:24.006067  263941 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:15:23.933847887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:15:24.006342  263941 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 11:15:24.006363  263941 cni.go:95] Creating CNI manager for ""
	I0601 11:15:24.006371  263941 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:15:24.006390  263941 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:15:24.006408  263941 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
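The cni.go lines above record a selection step worth making explicit: on the docker driver with a containerd runtime, minikube recommends kindnet and auto-adds an extra-config pointing the kubelet at /etc/cni/net.mk, apparently to keep it away from any leftover files in the default /etc/cni/net.d. A minimal Go sketch of that decision; the function name and signature are illustrative only, not minikube's real API:

	package main

	import "fmt"

	// chooseCNI mirrors the logged decision: on the docker driver, a
	// non-docker runtime cannot rely on dockershim networking, so
	// kindnet is recommended, with a dedicated kubelet conf dir.
	func chooseCNI(driver, runtime string) (cni, confDir string) {
		if driver == "docker" && runtime != "docker" {
			return "kindnet", "/etc/cni/net.mk"
		}
		return "bridge", "/etc/cni/net.d"
	}

	func main() {
		fmt.Println(chooseCNI("docker", "containerd")) // kindnet /etc/cni/net.mk
	}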
	I0601 11:15:24.006417  263941 start_flags.go:306] config:
	{Name:newest-cni-20220601111420-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601111420-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:15:24.009809  263941 out.go:177] * Starting control plane node newest-cni-20220601111420-6708 in cluster newest-cni-20220601111420-6708
	I0601 11:15:24.011231  263941 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:15:24.012641  263941 out.go:177] * Pulling base image ...
	I0601 11:15:24.013976  263941 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:15:24.014011  263941 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:15:24.014027  263941 cache.go:57] Caching tarball of preloaded images
	I0601 11:15:24.014077  263941 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:15:24.014241  263941 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:15:24.014256  263941 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:15:24.014374  263941 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/config.json ...
	I0601 11:15:24.061222  263941 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:15:24.061244  263941 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:15:24.061258  263941 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:15:24.061288  263941 start.go:352] acquiring machines lock for newest-cni-20220601111420-6708: {Name:mkca6185bbe40be078b8818f834ed4486ca40c22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:15:24.061383  263941 start.go:356] acquired machines lock for "newest-cni-20220601111420-6708" in 68.33µs
	I0601 11:15:24.061400  263941 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:15:24.061408  263941 fix.go:55] fixHost starting: 
	I0601 11:15:24.061613  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:24.093924  263941 fix.go:103] recreateIfNeeded on newest-cni-20220601111420-6708: state=Stopped err=<nil>
	W0601 11:15:24.093962  263941 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:15:24.097310  263941 out.go:177] * Restarting existing docker container for "newest-cni-20220601111420-6708" ...
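The fixHost step above is a simple check-then-start: inspect the existing container's state, and since it is Stopped rather than missing, reuse it with `docker start` instead of recreating it. A rough Go sketch of the same shape, shelling out to the docker CLI; this is illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureRunning reads the container state via `docker container
	// inspect` and starts the container again if it is not running.
	// Error handling is trimmed for brevity.
	func ensureRunning(name string) error {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return err // e.g. the container is missing and must be recreated
		}
		if strings.TrimSpace(string(out)) != "running" {
			fmt.Printf("* Restarting existing docker container for %q ...\n", name)
			return exec.Command("docker", "start", name).Run()
		}
		return nil
	}

	func main() { _ = ensureRunning("newest-cni-20220601111420-6708") }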
	I0601 11:15:25.356061  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:27.855425  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
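(The 254820 entries here and below are interleaved from a second minikube process running concurrently on the same agent; per the log-line format header above, the number after the timestamp identifies the emitting process, which is why the 263941 stream resumes at 11:15:24, earlier than the 254820 timestamps around it.)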
	I0601 11:15:24.098716  263941 cli_runner.go:164] Run: docker start newest-cni-20220601111420-6708
	I0601 11:15:24.458451  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:24.493425  263941 kic.go:416] container "newest-cni-20220601111420-6708" state is running.
	I0601 11:15:24.493755  263941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601111420-6708
	I0601 11:15:24.525979  263941 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/config.json ...
	I0601 11:15:24.526198  263941 machine.go:88] provisioning docker machine ...
	I0601 11:15:24.526226  263941 ubuntu.go:169] provisioning hostname "newest-cni-20220601111420-6708"
	I0601 11:15:24.526268  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:24.560036  263941 main.go:134] libmachine: Using SSH client type: native
	I0601 11:15:24.560215  263941 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0601 11:15:24.560235  263941 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220601111420-6708 && echo "newest-cni-20220601111420-6708" | sudo tee /etc/hostname
	I0601 11:15:24.560843  263941 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60658->127.0.0.1:49432: read: connection reset by peer
	I0601 11:15:27.680196  263941 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220601111420-6708
	
	I0601 11:15:27.680269  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:27.712212  263941 main.go:134] libmachine: Using SSH client type: native
	I0601 11:15:27.712395  263941 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0601 11:15:27.712419  263941 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220601111420-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220601111420-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220601111420-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:15:27.823406  263941 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:15:27.823434  263941 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:15:27.823466  263941 ubuntu.go:177] setting up certificates
	I0601 11:15:27.823475  263941 provision.go:83] configureAuth start
	I0601 11:15:27.823525  263941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601111420-6708
	I0601 11:15:27.855828  263941 provision.go:138] copyHostCerts
	I0601 11:15:27.855913  263941 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:15:27.855927  263941 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:15:27.855998  263941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:15:27.856131  263941 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:15:27.856146  263941 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:15:27.856185  263941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:15:27.856259  263941 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:15:27.856273  263941 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:15:27.856305  263941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:15:27.856384  263941 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220601111420-6708 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220601111420-6708]
	I0601 11:15:27.939200  263941 provision.go:172] copyRemoteCerts
	I0601 11:15:27.939285  263941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:15:27.939337  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:27.970870  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.054991  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:15:28.072444  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 11:15:28.089392  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:15:28.105846  263941 provision.go:86] duration metric: configureAuth took 282.359015ms
	I0601 11:15:28.105871  263941 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:15:28.106043  263941 config.go:178] Loaded profile config "newest-cni-20220601111420-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:15:28.106055  263941 machine.go:91] provisioned docker machine in 3.579843526s
	I0601 11:15:28.106063  263941 start.go:306] post-start starting for "newest-cni-20220601111420-6708" (driver="docker")
	I0601 11:15:28.106068  263941 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:15:28.106107  263941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:15:28.106147  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:28.137958  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.223355  263941 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:15:28.226130  263941 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:15:28.226164  263941 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:15:28.226178  263941 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:15:28.226185  263941 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:15:28.226198  263941 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:15:28.226245  263941 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:15:28.226316  263941 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:15:28.226388  263941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:15:28.232917  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:15:28.249498  263941 start.go:309] post-start completed in 143.423491ms
	I0601 11:15:28.249569  263941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:15:28.249616  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:28.281910  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.364106  263941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:15:28.367931  263941 fix.go:57] fixHost completed within 4.306519392s
	I0601 11:15:28.367950  263941 start.go:81] releasing machines lock for "newest-cni-20220601111420-6708", held for 4.306556384s
	I0601 11:15:28.368021  263941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601111420-6708
	I0601 11:15:28.400068  263941 ssh_runner.go:195] Run: systemctl --version
	I0601 11:15:28.400114  263941 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:15:28.400127  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:28.400158  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:28.433938  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.435630  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.540419  263941 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:15:28.551438  263941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:15:28.560441  263941 docker.go:187] disabling docker service ...
	I0601 11:15:28.560487  263941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:15:28.569793  263941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:15:28.578352  263941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:15:28.655832  263941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:15:28.735490  263941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:15:29.855609  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:32.355089  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:28.745109  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:15:28.757663  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:15:28.765775  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:15:28.774408  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:15:28.781883  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:15:28.789580  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:15:28.796858  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
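Two small configs are written inline above: crictl.yaml pointing both the runtime and image endpoints at the containerd socket, and a base64-encoded containerd drop-in. The payload dmVyc2lvbiA9IDIK decodes to "version = 2" plus a newline, i.e. the drop-in merely pins containerd's config schema version. A quick Go check of that decode:

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	func main() {
		// Decode the payload written to 02-containerd.conf above.
		b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%q\n", b) // "version = 2\n"
	}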
	I0601 11:15:28.808855  263941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:15:28.814976  263941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:15:28.821274  263941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:15:28.890122  263941 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:15:28.957431  263941 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:15:28.957500  263941 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:15:28.960944  263941 start.go:468] Will wait 60s for crictl version
	I0601 11:15:28.960999  263941 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:15:28.989353  263941 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:15:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:15:34.355560  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:36.855139  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:40.037202  263941 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:15:40.060106  263941 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:15:40.060158  263941 ssh_runner.go:195] Run: containerd --version
	I0601 11:15:40.087408  263941 ssh_runner.go:195] Run: containerd --version
	I0601 11:15:40.116134  263941 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:15:40.117565  263941 cli_runner.go:164] Run: docker network inspect newest-cni-20220601111420-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:15:40.149078  263941 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0601 11:15:40.152412  263941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:15:40.163666  263941 out.go:177]   - kubelet.network-plugin=cni
	I0601 11:15:40.165260  263941 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0601 11:15:40.166639  263941 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:15:38.855353  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:41.355553  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:40.167968  263941 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:15:40.168021  263941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:15:40.190473  263941 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:15:40.190492  263941 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:15:40.190538  263941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:15:40.212585  263941 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:15:40.212603  263941 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:15:40.212648  263941 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:15:40.235894  263941 cni.go:95] Creating CNI manager for ""
	I0601 11:15:40.235921  263941 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:15:40.235938  263941 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0601 11:15:40.235956  263941 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220601111420-6708 NodeName:newest-cni-20220601111420-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:15:40.236110  263941 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220601111420-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
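	One quirk in the rendered config above: the test passes kubeadm.pod-network-cidr=192.168.111.111/16, so podSubnet and clusterCIDR carry host bits; once masked, the network that value actually denotes is 192.168.0.0/16. A quick check with the standard library:
	
		package main
	
		import (
			"fmt"
			"net"
		)
	
		func main() {
			// ParseCIDR masks off the host bits, yielding the network
			// that the literal 192.168.111.111/16 actually denotes.
			_, ipnet, err := net.ParseCIDR("192.168.111.111/16")
			if err != nil {
				panic(err)
			}
			fmt.Println(ipnet) // 192.168.0.0/16
		}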
	
	I0601 11:15:40.236185  263941 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220601111420-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601111420-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 11:15:40.236231  263941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:15:40.242822  263941 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:15:40.242876  263941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:15:40.249239  263941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (612 bytes)
	I0601 11:15:40.261710  263941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:15:40.273575  263941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2195 bytes)
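The three "scp memory" transfers above stream generated content (the kubelet drop-in, the unit file, kubeadm.yaml.new) from the process's memory straight to remote paths, with no local temp file. A rough sketch of that idiom by piping a buffer into `sudo tee` over ssh; this shows only the shape, since minikube uses its own SSH runner, and the host, path, and helper name here are illustrative:

	package main

	import (
		"bytes"
		"os/exec"
	)

	// scpMemory pushes an in-memory buffer to a remote path by piping
	// it into `sudo tee` over ssh, analogous to the scp-memory lines.
	func scpMemory(host, path string, data []byte) error {
		cmd := exec.Command("ssh", host, "sudo tee "+path+" >/dev/null")
		cmd.Stdin = bytes.NewReader(data)
		return cmd.Run()
	}

	func main() {
		_ = scpMemory("docker@127.0.0.1", "/var/tmp/minikube/kubeadm.yaml.new",
			[]byte("# rendered kubeadm config\n"))
	}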
	I0601 11:15:40.285871  263941 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:15:40.288735  263941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:15:40.297596  263941 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708 for IP: 192.168.67.2
	I0601 11:15:40.297679  263941 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:15:40.297717  263941 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:15:40.297792  263941 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/client.key
	I0601 11:15:40.297858  263941 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/apiserver.key.c7fa3a9e
	I0601 11:15:40.297891  263941 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/proxy-client.key
	I0601 11:15:40.297985  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:15:40.298016  263941 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:15:40.298027  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:15:40.298054  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:15:40.298091  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:15:40.298119  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:15:40.298157  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:15:40.299260  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:15:40.316519  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:15:40.332783  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:15:40.348789  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:15:40.365111  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:15:40.381257  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:15:40.397766  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:15:40.414346  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:15:40.430808  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:15:40.446818  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:15:40.462868  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:15:40.479131  263941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:15:40.491365  263941 ssh_runner.go:195] Run: openssl version
	I0601 11:15:40.495890  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:15:40.502904  263941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:15:40.505746  263941 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:15:40.505788  263941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:15:40.510219  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:15:40.517063  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:15:40.524064  263941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:15:40.526999  263941 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:15:40.527033  263941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:15:40.531472  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:15:40.537733  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:15:40.544625  263941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:15:40.547474  263941 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:15:40.547517  263941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:15:40.552007  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:15:40.558495  263941 kubeadm.go:395] StartCluster: {Name:newest-cni-20220601111420-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601111420-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:15:40.558578  263941 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:15:40.558609  263941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:15:40.583148  263941 cri.go:87] found id: "485452f005e99f0f22bb03c4ca8c82fba5f6780c94070d8df7bd99044ffee4b9"
	I0601 11:15:40.583176  263941 cri.go:87] found id: "b507e47c39da0409cb255658e88a3d49752f7f998cf2f1cc34d63f78b7ddd011"
	I0601 11:15:40.583186  263941 cri.go:87] found id: "75894607da19e2350146806345ec03c2d10cca1c7d18ad1af5ec86948940418a"
	I0601 11:15:40.583195  263941 cri.go:87] found id: "4a9c8064a88cb9fe7f7cd29efcb27e351ec7ee84e8b53663018f531249450887"
	I0601 11:15:40.583205  263941 cri.go:87] found id: "e26d2e029235f2fee5bc272de83b26dbe1b0ceaa62fa9f5fd6fc9cad51d82152"
	I0601 11:15:40.583214  263941 cri.go:87] found id: "80ce00d6c8d0224e68bd82723b28724cb9452b78a8ff6d2c4494f3f6e02a6931"
	I0601 11:15:40.583220  263941 cri.go:87] found id: ""
	I0601 11:15:40.583261  263941 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:15:40.595217  263941 cri.go:114] JSON = null
	W0601 11:15:40.595264  263941 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0601 11:15:40.595339  263941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:15:40.601849  263941 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:15:40.601867  263941 kubeadm.go:626] restartCluster start
	I0601 11:15:40.601901  263941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:15:40.607903  263941 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:40.609082  263941 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220601111420-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:15:40.609970  263941 kubeconfig.go:127] "newest-cni-20220601111420-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:15:40.611243  263941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:15:40.612890  263941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:15:40.619118  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:40.619151  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:40.626770  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:40.827164  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:40.827243  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:40.835913  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.027171  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.027231  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.035630  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.226875  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.226947  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.235260  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.427590  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.427664  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.436391  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.627713  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.627790  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.636578  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.827917  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.827985  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.836472  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.027740  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.027825  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.036551  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.227897  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.227973  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.236376  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.427645  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.427728  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.436268  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.627544  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.627605  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.636087  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.827365  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.827443  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.835962  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.027237  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.027309  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.035670  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.226878  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.226948  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.235382  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.427671  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.427744  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.436178  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.627481  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.627549  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.635795  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.635816  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.635857  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.643323  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.643343  263941 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 11:15:43.643349  263941 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:15:43.643362  263941 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:15:43.643409  263941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:15:43.666360  263941 cri.go:87] found id: "485452f005e99f0f22bb03c4ca8c82fba5f6780c94070d8df7bd99044ffee4b9"
	I0601 11:15:43.666384  263941 cri.go:87] found id: "b507e47c39da0409cb255658e88a3d49752f7f998cf2f1cc34d63f78b7ddd011"
	I0601 11:15:43.666392  263941 cri.go:87] found id: "75894607da19e2350146806345ec03c2d10cca1c7d18ad1af5ec86948940418a"
	I0601 11:15:43.666399  263941 cri.go:87] found id: "4a9c8064a88cb9fe7f7cd29efcb27e351ec7ee84e8b53663018f531249450887"
	I0601 11:15:43.666405  263941 cri.go:87] found id: "e26d2e029235f2fee5bc272de83b26dbe1b0ceaa62fa9f5fd6fc9cad51d82152"
	I0601 11:15:43.666411  263941 cri.go:87] found id: "80ce00d6c8d0224e68bd82723b28724cb9452b78a8ff6d2c4494f3f6e02a6931"
	I0601 11:15:43.666428  263941 cri.go:87] found id: ""
	I0601 11:15:43.666434  263941 cri.go:232] Stopping containers: [485452f005e99f0f22bb03c4ca8c82fba5f6780c94070d8df7bd99044ffee4b9 b507e47c39da0409cb255658e88a3d49752f7f998cf2f1cc34d63f78b7ddd011 75894607da19e2350146806345ec03c2d10cca1c7d18ad1af5ec86948940418a 4a9c8064a88cb9fe7f7cd29efcb27e351ec7ee84e8b53663018f531249450887 e26d2e029235f2fee5bc272de83b26dbe1b0ceaa62fa9f5fd6fc9cad51d82152 80ce00d6c8d0224e68bd82723b28724cb9452b78a8ff6d2c4494f3f6e02a6931]
	I0601 11:15:43.666476  263941 ssh_runner.go:195] Run: which crictl
	I0601 11:15:43.669061  263941 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 485452f005e99f0f22bb03c4ca8c82fba5f6780c94070d8df7bd99044ffee4b9 b507e47c39da0409cb255658e88a3d49752f7f998cf2f1cc34d63f78b7ddd011 75894607da19e2350146806345ec03c2d10cca1c7d18ad1af5ec86948940418a 4a9c8064a88cb9fe7f7cd29efcb27e351ec7ee84e8b53663018f531249450887 e26d2e029235f2fee5bc272de83b26dbe1b0ceaa62fa9f5fd6fc9cad51d82152 80ce00d6c8d0224e68bd82723b28724cb9452b78a8ff6d2c4494f3f6e02a6931
	I0601 11:15:43.693039  263941 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:15:43.702809  263941 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:15:43.709582  263941 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  1 11:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 11:14 /etc/kubernetes/scheduler.conf
	
	I0601 11:15:43.709641  263941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:15:43.716303  263941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:15:43.722697  263941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:15:43.729346  263941 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.729383  263941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:15:43.736021  263941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:15:43.855489  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:45.856143  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:48.355487  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:43.742199  263941 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.744599  263941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 11:15:43.750661  263941 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:15:43.757080  263941 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:15:43.757099  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:43.799332  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:44.640811  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:44.778775  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:44.825201  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:44.878693  263941 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:15:44.878756  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:45.387292  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:45.887437  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:46.386681  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:46.887711  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:47.386774  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:47.886999  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:48.387146  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:50.856156  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:53.355090  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:48.887298  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:49.387361  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:49.887409  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:50.386663  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:50.887194  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:50.963594  263941 api_server.go:71] duration metric: took 6.084905922s to wait for apiserver process to appear ...
	I0601 11:15:50.963625  263941 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:15:50.963637  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:50.964071  263941 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0601 11:15:51.464797  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:54.285946  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:15:54.285972  263941 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:15:54.464237  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:54.470757  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:15:54.470781  263941 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:15:54.965028  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:54.969042  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:15:54.969073  263941 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:15:55.464581  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:55.468499  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0601 11:15:55.474952  263941 api_server.go:140] control plane version: v1.23.6
	I0601 11:15:55.474977  263941 api_server.go:130] duration metric: took 4.511346167s to wait for apiserver health ...
	I0601 11:15:55.474987  263941 cni.go:95] Creating CNI manager for ""
	I0601 11:15:55.474993  263941 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:15:55.476649  263941 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:15:55.478132  263941 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:15:55.481790  263941 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:15:55.481811  263941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:15:55.495316  263941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:15:56.321166  263941 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:15:56.328415  263941 system_pods.go:59] 9 kube-system pods found
	I0601 11:15:56.328445  263941 system_pods.go:61] "coredns-64897985d-84z4b" [61675132-c8b8-4faa-81c0-afc49bcb9115] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.328454  263941 system_pods.go:61] "etcd-newest-cni-20220601111420-6708" [cdadc9cc-7472-44f1-8727-1510dd722c1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 11:15:56.328472  263941 system_pods.go:61] "kindnet-mrmdr" [a0f0d07c-f270-4d94-a1e1-739b17c1abfd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:15:56.328482  263941 system_pods.go:61] "kube-apiserver-newest-cni-20220601111420-6708" [2c130ee1-e9f5-4d2f-8871-e71f31fb1c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:15:56.328500  263941 system_pods.go:61] "kube-controller-manager-newest-cni-20220601111420-6708" [5dac330c-fddb-4715-90fd-e1b77c0f6b67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:15:56.328509  263941 system_pods.go:61] "kube-proxy-n497l" [015d9acb-73fb-47e5-bb6c-9856dc97937f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:15:56.328515  263941 system_pods.go:61] "kube-scheduler-newest-cni-20220601111420-6708" [4dc87b0e-7ab2-41a0-9cc4-4c358a05e707] Running
	I0601 11:15:56.328526  263941 system_pods.go:61] "metrics-server-b955d9d8-l7q2p" [7dc19801-604c-4dea-8f7c-f149d7c519db] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.328535  263941 system_pods.go:61] "storage-provisioner" [8afd3e8e-ea29-4b3e-b4dd-ce13b93d0469] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.328539  263941 system_pods.go:74] duration metric: took 7.354592ms to wait for pod list to return data ...
	I0601 11:15:56.328550  263941 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:15:56.331379  263941 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:15:56.331405  263941 node_conditions.go:123] node cpu capacity is 8
	I0601 11:15:56.331415  263941 node_conditions.go:105] duration metric: took 2.860329ms to run NodePressure ...
	I0601 11:15:56.331429  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:56.521154  263941 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:15:56.528009  263941 ops.go:34] apiserver oom_adj: -16
	I0601 11:15:56.528032  263941 kubeadm.go:630] restartCluster took 15.926157917s
	I0601 11:15:56.528040  263941 kubeadm.go:397] StartCluster complete in 15.969552087s
	I0601 11:15:56.528055  263941 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:15:56.528150  263941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:15:56.529675  263941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:15:56.533017  263941 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220601111420-6708" rescaled to 1
	I0601 11:15:56.533078  263941 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:15:56.533096  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:15:56.535966  263941 out.go:177] * Verifying Kubernetes components...
	I0601 11:15:56.533184  263941 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 11:15:56.533325  263941 config.go:178] Loaded profile config "newest-cni-20220601111420-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:15:56.537639  263941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:15:56.537656  263941 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220601111420-6708"
	I0601 11:15:56.537672  263941 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220601111420-6708"
	I0601 11:15:56.537676  263941 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220601111420-6708"
	W0601 11:15:56.537684  263941 addons.go:165] addon metrics-server should already be in state true
	I0601 11:15:56.537694  263941 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220601111420-6708"
	I0601 11:15:56.537644  263941 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220601111420-6708"
	I0601 11:15:56.537706  263941 addons.go:65] Setting dashboard=true in profile "newest-cni-20220601111420-6708"
	I0601 11:15:56.537742  263941 host.go:66] Checking if "newest-cni-20220601111420-6708" exists ...
	I0601 11:15:56.537756  263941 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220601111420-6708"
	W0601 11:15:56.537788  263941 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:15:56.537848  263941 host.go:66] Checking if "newest-cni-20220601111420-6708" exists ...
	I0601 11:15:56.537761  263941 addons.go:153] Setting addon dashboard=true in "newest-cni-20220601111420-6708"
	W0601 11:15:56.537913  263941 addons.go:165] addon dashboard should already be in state true
	I0601 11:15:56.537975  263941 host.go:66] Checking if "newest-cni-20220601111420-6708" exists ...
	I0601 11:15:56.538058  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.538239  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.538387  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.538439  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.585758  263941 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:15:56.587350  263941 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:15:56.588849  263941 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:15:56.590411  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:15:56.590428  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:15:56.590467  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:56.588805  263941 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:15:56.590499  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:15:56.592950  263941 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:15:56.590546  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:56.595053  263941 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:15:56.595071  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:15:56.595138  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:56.603468  263941 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220601111420-6708"
	W0601 11:15:56.603493  263941 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:15:56.603522  263941 host.go:66] Checking if "newest-cni-20220601111420-6708" exists ...
	I0601 11:15:56.603903  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.619336  263941 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:15:56.619430  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:56.619353  263941 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 11:15:56.636422  263941 api_server.go:71] duration metric: took 103.308361ms to wait for apiserver process to appear ...
	I0601 11:15:56.636452  263941 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:15:56.636465  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:56.636778  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:56.641977  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:56.642020  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0601 11:15:56.642243  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:56.644077  263941 api_server.go:140] control plane version: v1.23.6
	I0601 11:15:56.644097  263941 api_server.go:130] duration metric: took 7.638012ms to wait for apiserver health ...
	I0601 11:15:56.644106  263941 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:15:56.646495  263941 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:15:56.646517  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:15:56.646565  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:56.650901  263941 system_pods.go:59] 9 kube-system pods found
	I0601 11:15:56.650933  263941 system_pods.go:61] "coredns-64897985d-84z4b" [61675132-c8b8-4faa-81c0-afc49bcb9115] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.650944  263941 system_pods.go:61] "etcd-newest-cni-20220601111420-6708" [cdadc9cc-7472-44f1-8727-1510dd722c1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 11:15:56.650954  263941 system_pods.go:61] "kindnet-mrmdr" [a0f0d07c-f270-4d94-a1e1-739b17c1abfd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:15:56.650969  263941 system_pods.go:61] "kube-apiserver-newest-cni-20220601111420-6708" [2c130ee1-e9f5-4d2f-8871-e71f31fb1c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:15:56.650987  263941 system_pods.go:61] "kube-controller-manager-newest-cni-20220601111420-6708" [5dac330c-fddb-4715-90fd-e1b77c0f6b67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:15:56.651000  263941 system_pods.go:61] "kube-proxy-n497l" [015d9acb-73fb-47e5-bb6c-9856dc97937f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:15:56.651017  263941 system_pods.go:61] "kube-scheduler-newest-cni-20220601111420-6708" [4dc87b0e-7ab2-41a0-9cc4-4c358a05e707] Running
	I0601 11:15:56.651030  263941 system_pods.go:61] "metrics-server-b955d9d8-l7q2p" [7dc19801-604c-4dea-8f7c-f149d7c519db] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.651042  263941 system_pods.go:61] "storage-provisioner" [8afd3e8e-ea29-4b3e-b4dd-ce13b93d0469] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.651048  263941 system_pods.go:74] duration metric: took 6.936145ms to wait for pod list to return data ...
	I0601 11:15:56.651060  263941 default_sa.go:34] waiting for default service account to be created ...
	I0601 11:15:56.653680  263941 default_sa.go:45] found service account: "default"
	I0601 11:15:56.653708  263941 default_sa.go:55] duration metric: took 2.640942ms for default service account to be created ...
	I0601 11:15:56.653719  263941 kubeadm.go:572] duration metric: took 120.609786ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0601 11:15:56.653752  263941 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:15:56.656341  263941 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:15:56.656360  263941 node_conditions.go:123] node cpu capacity is 8
	I0601 11:15:56.656374  263941 node_conditions.go:105] duration metric: took 2.616904ms to run NodePressure ...
	I0601 11:15:56.656386  263941 start.go:213] waiting for startup goroutines ...
	I0601 11:15:56.684635  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:56.729362  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:15:56.729387  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:15:56.737513  263941 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:15:56.737533  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:15:56.738082  263941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:15:56.742656  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:15:56.742678  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:15:56.751813  263941 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:15:56.751842  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:15:56.756435  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:15:56.756456  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:15:56.766690  263941 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:15:56.766709  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:15:56.771728  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:15:56.771748  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:15:56.779554  263941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:15:56.780709  263941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:15:56.785464  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:15:56.785491  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:15:56.861755  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:15:56.861784  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:15:56.885416  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:15:56.885451  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:15:56.964682  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:15:56.964712  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:15:56.981552  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:15:56.981583  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:15:57.060456  263941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:15:57.254453  263941 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220601111420-6708"
	I0601 11:15:57.416236  263941 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 11:15:57.417673  263941 addons.go:417] enableAddons completed in 884.4947ms
	I0601 11:15:57.455697  263941 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0601 11:15:57.458150  263941 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220601111420-6708" cluster and "default" namespace by default
	I0601 11:15:55.356481  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:57.855372  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:59.855419  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:02.354744  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:04.355179  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:06.355418  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:08.355954  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	
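The repeated pod_ready polls above are all blocked by the same thing: the pod stays Pending because the only node carries a taint it does not tolerate. As a sketch (the kubectl context for the affected cluster is left as a placeholder), the taints can be listed directly:

    kubectl --context <cluster-context> get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
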
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	f1db42bbd17fa       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   20ed2db10bff6
	4a8be0c7cfc53       4c03754524064       12 minutes ago      Running             kube-proxy                0                   043df8eb6f8fb
	d49ab0e8a34f4       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   31f96fd01399b
	c32cb0a91408a       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   87ef42c5de136
	a985029383eb2       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   a4a80ab623aae
	b8dd730d917c4       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   dfde8cf669db7
	
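The container status table shows kindnet-cni Exited on attempt 3 while every control-plane component keeps running, which matches the CrashLoopBackOff in the kubelet log further down. A minimal sketch for pulling the crashed container's last output, using the pod and container names from this report:

    kubectl --context embed-certs-20220601110327-6708 -n kube-system \
      logs kindnet-92tfl -c kindnet-cni --previous
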
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:03:36 UTC, end at Wed 2022-06-01 11:16:12 UTC. --
	Jun 01 11:09:31 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:31.490278883Z" level=warning msg="cleaning up after shim disconnected" id=44a64d6574af41b7959f71d1dab2a88484c78c34edb54a7a824ddd43a44b981e namespace=k8s.io
	Jun 01 11:09:31 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:31.490292961Z" level=info msg="cleaning up dead shim"
	Jun 01 11:09:31 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:31.499482545Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:09:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2407 runtime=io.containerd.runc.v2\n"
	Jun 01 11:09:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:32.336053606Z" level=info msg="RemoveContainer for \"303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494\""
	Jun 01 11:09:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:32.342792466Z" level=info msg="RemoveContainer for \"303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494\" returns successfully"
	Jun 01 11:09:43 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:43.691241193Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jun 01 11:09:43 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:43.704218206Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\""
	Jun 01 11:09:43 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:43.704753941Z" level=info msg="StartContainer for \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\""
	Jun 01 11:09:43 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:43.772300743Z" level=info msg="StartContainer for \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\" returns successfully"
	Jun 01 11:12:23 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:23.994491240Z" level=info msg="shim disconnected" id=6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65
	Jun 01 11:12:23 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:23.994563158Z" level=warning msg="cleaning up after shim disconnected" id=6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65 namespace=k8s.io
	Jun 01 11:12:23 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:23.994577765Z" level=info msg="cleaning up dead shim"
	Jun 01 11:12:24 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:24.004045038Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:12:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2504 runtime=io.containerd.runc.v2\n"
	Jun 01 11:12:24 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:24.645186775Z" level=info msg="RemoveContainer for \"44a64d6574af41b7959f71d1dab2a88484c78c34edb54a7a824ddd43a44b981e\""
	Jun 01 11:12:24 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:24.649222973Z" level=info msg="RemoveContainer for \"44a64d6574af41b7959f71d1dab2a88484c78c34edb54a7a824ddd43a44b981e\" returns successfully"
	Jun 01 11:12:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:51.691014066Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jun 01 11:12:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:51.703587307Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187\""
	Jun 01 11:12:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:51.704136363Z" level=info msg="StartContainer for \"f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187\""
	Jun 01 11:12:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:51.769975659Z" level=info msg="StartContainer for \"f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187\" returns successfully"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.089351164Z" level=info msg="shim disconnected" id=f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.089416099Z" level=warning msg="cleaning up after shim disconnected" id=f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187 namespace=k8s.io
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.089443667Z" level=info msg="cleaning up dead shim"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.098608733Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:15:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2605 runtime=io.containerd.runc.v2\n"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.986040568Z" level=info msg="RemoveContainer for \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\""
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.990700608Z" level=info msg="RemoveContainer for \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\" returns successfully"
	
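These containerd entries are read from journald inside the node, so the same cycle of StartContainer, shim disconnected, and RemoveContainer can be tailed directly; a sketch, assuming ssh access to the profile works as usual:

    minikube ssh -p embed-certs-20220601110327-6708 -- \
      sudo journalctl -u containerd --no-pager -n 50
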
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220601110327-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220601110327-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=embed-certs-20220601110327-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_03_56_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:03:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220601110327-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:16:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:14:13 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:14:13 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:14:13 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:14:13 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220601110327-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                d600b159-ea34-4ea3-ab62-e86c595f06ef
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220601110327-6708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-92tfl                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20220601110327-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20220601110327-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-99lsz                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20220601110327-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	
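The describe output ties the failure together: Ready is False with "cni plugin not initialized", and the resulting node.kubernetes.io/not-ready:NoSchedule taint is what keeps busybox and coredns Pending. Since kindnet never stays up long enough to install a CNI config, a quick existence check on the conf directories is a reasonable next step; a sketch, assuming the /etc/cni/net.mk path these tests pass via kubelet.cni-conf-dir (one or both paths may simply not exist yet):

    minikube ssh -p embed-certs-20220601110327-6708 -- \
      ls -l /etc/cni/net.d /etc/cni/net.mk
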
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
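The martian-source lines are the kernel flagging packets whose source address is implausible for the receiving interface, common churn while test clusters create and tear down bridge/CNI networks. Whether that logging is enabled can be read back via sysctl; a sketch:

    minikube ssh -p embed-certs-20220601110327-6708 -- \
      sysctl net.ipv4.conf.all.log_martians
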
	* 
	* ==> etcd [b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2] <==
	* {"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220601110327-6708 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.086Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-06-01T11:03:50.086Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:06:59.331Z","caller":"traceutil/trace.go:171","msg":"trace[993403062] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"164.749443ms","start":"2022-06-01T11:06:59.166Z","end":"2022-06-01T11:06:59.331Z","steps":["trace[993403062] 'read index received'  (duration: 164.741295ms)","trace[993403062] 'applied index is now lower than readState.Index'  (duration: 7.261µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:06:59.332Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"166.049774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20220601110327-6708\" ","response":"range_response_count:1 size:4776"}
	{"level":"info","ts":"2022-06-01T11:06:59.332Z","caller":"traceutil/trace.go:171","msg":"trace[243859244] range","detail":"{range_begin:/registry/minions/embed-certs-20220601110327-6708; range_end:; response_count:1; response_revision:516; }","duration":"166.144768ms","start":"2022-06-01T11:06:59.166Z","end":"2022-06-01T11:06:59.332Z","steps":["trace[243859244] 'agreement among raft nodes before linearized reading'  (duration: 164.864212ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T11:13:50.105Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":546}
	{"level":"info","ts":"2022-06-01T11:13:50.106Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":546,"took":"429.021µs"}
	{"level":"warn","ts":"2022-06-01T11:14:26.753Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"233.709122ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/busybox.16f47a88b4a70945\" ","response":"range_response_count:1 size:676"}
	{"level":"info","ts":"2022-06-01T11:14:26.753Z","caller":"traceutil/trace.go:171","msg":"trace[840453810] range","detail":"{range_begin:/registry/events/default/busybox.16f47a88b4a70945; range_end:; response_count:1; response_revision:652; }","duration":"233.80507ms","start":"2022-06-01T11:14:26.520Z","end":"2022-06-01T11:14:26.753Z","steps":["trace[840453810] 'agreement among raft nodes before linearized reading'  (duration: 46.352891ms)","trace[840453810] 'range keys from in-memory index tree'  (duration: 187.31702ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:14:26.753Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"226.054261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:14:26.753Z","caller":"traceutil/trace.go:171","msg":"trace[91545500] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:652; }","duration":"226.231448ms","start":"2022-06-01T11:14:26.527Z","end":"2022-06-01T11:14:26.753Z","steps":["trace[91545500] 'agreement among raft nodes before linearized reading'  (duration: 38.693159ms)","trace[91545500] 'count revisions from in-memory index tree'  (duration: 187.331729ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:14:27.037Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"181.740284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/storage-provisioner.16f47a507eb9e79b\" ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2022-06-01T11:14:27.037Z","caller":"traceutil/trace.go:171","msg":"trace[695362410] range","detail":"{range_begin:/registry/events/kube-system/storage-provisioner.16f47a507eb9e79b; range_end:; response_count:1; response_revision:653; }","duration":"181.817862ms","start":"2022-06-01T11:14:26.856Z","end":"2022-06-01T11:14:27.037Z","steps":["trace[695362410] 'agreement among raft nodes before linearized reading'  (duration: 83.115698ms)","trace[695362410] 'range keys from in-memory index tree'  (duration: 98.584798ms)"],"step_count":2}
	
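The etcd log shows a clean single-member election at term 2, so the later "apply request took too long" warnings point at slow disk or CPU contention on the CI host rather than leader churn. Endpoint state can be probed in-pod; a sketch, with the certificate paths assumed to follow minikube's usual /var/lib/minikube/certs layout:

    kubectl --context embed-certs-20220601110327-6708 -n kube-system \
      exec etcd-embed-certs-20220601110327-6708 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status -w table
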
	* 
	* ==> kernel <==
	*  11:16:12 up 58 min,  0 users,  load average: 3.45, 3.45, 2.45
	Linux embed-certs-20220601110327-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a] <==
	* I0601 11:03:52.353146       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:03:52.353180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:03:52.353221       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:03:52.353225       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:03:52.353496       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:03:52.354371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 11:03:53.223928       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:03:53.230007       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 11:03:53.232751       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:03:53.233028       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 11:03:53.233046       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:03:53.654795       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:03:53.685311       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:03:53.775744       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:03:53.783710       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0601 11:03:53.784644       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:03:53.788004       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:03:54.362824       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:03:55.411558       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:03:55.418495       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:03:55.427653       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:04:00.570330       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:04:08.019838       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:04:08.117524       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:04:08.961758       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0] <==
	* I0601 11:04:07.415430       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:04:07.415463       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0601 11:04:07.418512       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:04:07.461867       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:04:07.464497       1 shared_informer.go:247] Caches are synced for taint 
	I0601 11:04:07.464573       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0601 11:04:07.464636       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0601 11:04:07.464710       1 event.go:294] "Event occurred" object="embed-certs-20220601110327-6708" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220601110327-6708 event: Registered Node embed-certs-20220601110327-6708 in Controller"
	W0601 11:04:07.464641       1 node_lifecycle_controller.go:1012] Missing timestamp for Node embed-certs-20220601110327-6708. Assuming now as a timestamp.
	I0601 11:04:07.464736       1 shared_informer.go:247] Caches are synced for GC 
	I0601 11:04:07.464789       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 11:04:07.464794       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0601 11:04:07.465561       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:04:07.466207       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:04:07.466846       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 11:04:07.844776       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:04:07.860043       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:04:07.860076       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:04:08.021708       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:04:08.045076       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:04:08.122691       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-99lsz"
	I0601 11:04:08.125091       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-92tfl"
	I0601 11:04:08.220606       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-2ms6r"
	I0601 11:04:08.226533       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9dpfv"
	I0601 11:04:08.241748       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-2ms6r"
	
	* 
	* ==> kube-proxy [4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6] <==
	* I0601 11:04:08.785335       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0601 11:04:08.785408       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0601 11:04:08.785447       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:04:08.956676       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:04:08.957522       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:04:08.957544       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:04:08.957576       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:04:08.958014       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:04:08.958572       1 config.go:317] "Starting service config controller"
	I0601 11:04:08.958596       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:04:08.959266       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:04:08.959287       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:04:09.058849       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:04:09.059356       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
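kube-proxy settled on the iptables proxier and synced its caches, so service VIPs should be materialized as KUBE-SERVICES rules on the node; a sketch for spot-checking that:

    minikube ssh -p embed-certs-20220601110327-6708 -- \
      sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20
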
	* 
	* ==> kube-scheduler [a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f] <==
	* W0601 11:03:52.358283       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:03:52.358434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:03:52.358594       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:52.358832       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:52.358601       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:03:52.358891       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:03:52.358710       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:03:52.358916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:03:53.226960       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:03:53.227001       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:03:53.235048       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:03:53.235096       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:03:53.321811       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:03:53.321848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:03:53.385122       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:53.385163       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:53.405212       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:03:53.405259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:03:53.455747       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:53.455790       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:53.455746       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:03:53.455816       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:03:53.557775       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:03:53.557818       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:03:55.783702       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:03:36 UTC, end at Wed 2022-06-01 11:16:12 UTC. --
	Jun 01 11:14:55 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:14:55.935198    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:00 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:00.935843    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:05 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:05.936504    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:10 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:10.937622    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:15 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:15.938561    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:20 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:20.939399    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:25 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:25.940578    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:30 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:30.941588    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:15:32.984806    1320 scope.go:110] "RemoveContainer" containerID="6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:15:32.985130    1320 scope.go:110] "RemoveContainer" containerID="f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:32.985444    1320 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-92tfl_kube-system(1e2e52a8-4f89-49af-9741-f79384628a29)\"" pod="kube-system/kindnet-92tfl" podUID=1e2e52a8-4f89-49af-9741-f79384628a29
	Jun 01 11:15:35 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:35.942217    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:40 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:40.943213    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:45 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:15:45.688928    1320 scope.go:110] "RemoveContainer" containerID="f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	Jun 01 11:15:45 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:45.689200    1320 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-92tfl_kube-system(1e2e52a8-4f89-49af-9741-f79384628a29)\"" pod="kube-system/kindnet-92tfl" podUID=1e2e52a8-4f89-49af-9741-f79384628a29
	Jun 01 11:15:45 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:45.944161    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:50 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:50.945767    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:55 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:55.947408    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:56 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:15:56.688789    1320 scope.go:110] "RemoveContainer" containerID="f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	Jun 01 11:15:56 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:56.689204    1320 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-92tfl_kube-system(1e2e52a8-4f89-49af-9741-f79384628a29)\"" pod="kube-system/kindnet-92tfl" podUID=1e2e52a8-4f89-49af-9741-f79384628a29
	Jun 01 11:16:00 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:16:00.948598    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:16:05 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:16:05.949621    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:16:07 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:16:07.688654    1320 scope.go:110] "RemoveContainer" containerID="f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	Jun 01 11:16:07 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:16:07.688944    1320 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-92tfl_kube-system(1e2e52a8-4f89-49af-9741-f79384628a29)\"" pod="kube-system/kindnet-92tfl" podUID=1e2e52a8-4f89-49af-9741-f79384628a29
	Jun 01 11:16:10 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:16:10.951264    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	
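The kubelet log closes the loop: kindnet-cni sits in a 40s CrashLoopBackOff, so the CNI is never initialized and the node never leaves NotReady. The restart count and last termination reason can be read straight off the pod status; a sketch:

    kubectl --context embed-certs-20220601110327-6708 -n kube-system get pod kindnet-92tfl \
      -o jsonpath='{.status.containerStatuses[0].restartCount}{"\t"}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'
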

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-9dpfv storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 describe pod busybox coredns-64897985d-9dpfv storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220601110327-6708 describe pod busybox coredns-64897985d-9dpfv storage-provisioner: exit status 1 (60.47421ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgcrb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wgcrb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  47s (x8 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-9dpfv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220601110327-6708 describe pod busybox coredns-64897985d-9dpfv storage-provisioner: exit status 1
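The FailedScheduling event above is consistent with the pod spec: busybox only carries the default not-ready/unreachable NoExecute tolerations, so the node's NoSchedule taint blocks it. Reading the tolerations back confirms that; a sketch:

    kubectl --context embed-certs-20220601110327-6708 get pod busybox \
      -o jsonpath='{.spec.tolerations}'
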
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601110327-6708
helpers_test.go:235: (dbg) docker inspect embed-certs-20220601110327-6708:

-- stdout --
	[
	    {
	        "Id": "b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d",
	        "Created": "2022-06-01T11:03:36.104826313Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:03:36.476018297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hosts",
	        "LogPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d-json.log",
	        "Name": "/embed-certs-20220601110327-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220601110327-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220601110327-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220601110327-6708",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220601110327-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220601110327-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e07617b2a6be7f1d7fcd4f72c38164dc41010e13179d5f3d71f30078705fa21",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6e07617b2a6b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220601110327-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b77a5d5e61bf",
	                        "embed-certs-20220601110327-6708"
	                    ],
	                    "NetworkID": "85c31b5e416e869b4ae1612c11e4fd39718a187a5009c211794c61313cb0c682",
	                    "EndpointID": "8df55589072b1e0d65a42a89f9b0e4d5153d5de972481a98d522d287ef34389c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
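
The inspect dump above ends with the container's published ports: each guest port (22, 2376, 5000, 8443, 32443) is bound to an ephemeral port on 127.0.0.1, e.g. 22/tcp to 127.0.0.1:49412. Later in these logs, cli_runner resolves the SSH endpoint with exactly this kind of template query against dockerd. As a minimal standalone sketch (not part of the test suite; the profile name is only a placeholder), the lookup amounts to:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Hypothetical: any running minikube KIC container name works here.
        name := "embed-certs-20220601110327-6708"
        // The same Go template cli_runner passes to `docker container inspect -f`
        // below to find the host port mapped to the guest's 22/tcp.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("sshd published on 127.0.0.1:" + strings.TrimSpace(string(out)))
    }

Run against a live container, this prints the same 127.0.0.1:<port> address the sshutil.go lines report further down.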
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220601110327-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| pause   | -p                                                         | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220601105939-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | no-preload-20220601105939-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220601110654-6708      | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | disable-driver-mounts-20220601110654-6708                  |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | calico-20220601104839-6708                                 | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220601104839-6708                              | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:15:23
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:15:23.741784  263941 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:15:23.741991  263941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:15:23.742003  263941 out.go:309] Setting ErrFile to fd 2...
	I0601 11:15:23.742008  263941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:15:23.742123  263941 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:15:23.742399  263941 out.go:303] Setting JSON to false
	I0601 11:15:23.744026  263941 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3478,"bootTime":1654078646,"procs":610,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:15:23.744098  263941 start.go:125] virtualization: kvm guest
	I0601 11:15:23.746332  263941 out.go:177] * [newest-cni-20220601111420-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:15:23.747824  263941 notify.go:193] Checking for updates...
	I0601 11:15:23.749331  263941 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:15:23.750766  263941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:15:23.752196  263941 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:15:23.753477  263941 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:15:23.754830  263941 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:15:23.756669  263941 config.go:178] Loaded profile config "newest-cni-20220601111420-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:15:23.757075  263941 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:15:23.796054  263941 docker.go:137] docker version: linux-20.10.16
	I0601 11:15:23.796147  263941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:15:23.900733  263941 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:15:23.826843836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:15:23.900836  263941 docker.go:254] overlay module found
	I0601 11:15:23.902962  263941 out.go:177] * Using the docker driver based on existing profile
	I0601 11:15:23.904197  263941 start.go:284] selected driver: docker
	I0601 11:15:23.904212  263941 start.go:806] validating driver "docker" against &{Name:newest-cni-20220601111420-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601111420-6708 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAd
donRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:15:23.904305  263941 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:15:23.905170  263941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:15:24.006067  263941 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:15:23.933847887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:15:24.006342  263941 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 11:15:24.006363  263941 cni.go:95] Creating CNI manager for ""
	I0601 11:15:24.006371  263941 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:15:24.006390  263941 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:15:24.006408  263941 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:15:24.006417  263941 start_flags.go:306] config:
	{Name:newest-cni-20220601111420-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601111420-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true
apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:15:24.009809  263941 out.go:177] * Starting control plane node newest-cni-20220601111420-6708 in cluster newest-cni-20220601111420-6708
	I0601 11:15:24.011231  263941 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:15:24.012641  263941 out.go:177] * Pulling base image ...
	I0601 11:15:24.013976  263941 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:15:24.014011  263941 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:15:24.014027  263941 cache.go:57] Caching tarball of preloaded images
	I0601 11:15:24.014077  263941 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:15:24.014241  263941 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:15:24.014256  263941 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:15:24.014374  263941 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/config.json ...
	I0601 11:15:24.061222  263941 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:15:24.061244  263941 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:15:24.061258  263941 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:15:24.061288  263941 start.go:352] acquiring machines lock for newest-cni-20220601111420-6708: {Name:mkca6185bbe40be078b8818f834ed4486ca40c22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:15:24.061383  263941 start.go:356] acquired machines lock for "newest-cni-20220601111420-6708" in 68.33µs
	I0601 11:15:24.061400  263941 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:15:24.061408  263941 fix.go:55] fixHost starting: 
	I0601 11:15:24.061613  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:24.093924  263941 fix.go:103] recreateIfNeeded on newest-cni-20220601111420-6708: state=Stopped err=<nil>
	W0601 11:15:24.093962  263941 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:15:24.097310  263941 out.go:177] * Restarting existing docker container for "newest-cni-20220601111420-6708" ...
	I0601 11:15:25.356061  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:27.855425  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
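(The lines from PID 254820 interleaved here belong to a test running in parallel, judging by the coredns-5644d7b6d9 ReplicaSet hash most likely the old-k8s-version profile, whose coredns pod is stuck Pending because the lone node carries a taint the pod does not tolerate. Parallel integration tests share this log stream, so their output interleaves with the 263941 lines above and below.)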
	I0601 11:15:24.098716  263941 cli_runner.go:164] Run: docker start newest-cni-20220601111420-6708
	I0601 11:15:24.458451  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:24.493425  263941 kic.go:416] container "newest-cni-20220601111420-6708" state is running.
	I0601 11:15:24.493755  263941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601111420-6708
	I0601 11:15:24.525979  263941 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/config.json ...
	I0601 11:15:24.526198  263941 machine.go:88] provisioning docker machine ...
	I0601 11:15:24.526226  263941 ubuntu.go:169] provisioning hostname "newest-cni-20220601111420-6708"
	I0601 11:15:24.526268  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:24.560036  263941 main.go:134] libmachine: Using SSH client type: native
	I0601 11:15:24.560215  263941 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0601 11:15:24.560235  263941 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220601111420-6708 && echo "newest-cni-20220601111420-6708" | sudo tee /etc/hostname
	I0601 11:15:24.560843  263941 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60658->127.0.0.1:49432: read: connection reset by peer
	I0601 11:15:27.680196  263941 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220601111420-6708
	
	I0601 11:15:27.680269  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:27.712212  263941 main.go:134] libmachine: Using SSH client type: native
	I0601 11:15:27.712395  263941 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0601 11:15:27.712419  263941 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220601111420-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220601111420-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220601111420-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:15:27.823406  263941 main.go:134] libmachine: SSH cmd err, output: <nil>: 
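
The SSH command just above guards and edits /etc/hosts in three steps: skip the edit if the hostname already resolves, rewrite an existing 127.0.1.1 line in place, otherwise append a fresh entry. The usual reason for this Debian/Ubuntu-style 127.0.1.1 entry is to keep sudo and similar tools from failing hostname resolution after the rename. A pure-function sketch of the same logic (an assumed equivalent, operating on strings so it runs without root or a real /etc/hosts; the grep guard is approximated with a substring check):

    package main

    import (
        "fmt"
        "strings"
    )

    // patchHosts mirrors the guarded edit in the SSH command above: leave the
    // file alone if the hostname is already present, rewrite an existing
    // 127.0.1.1 line if there is one, and append a new entry otherwise.
    func patchHosts(hosts, hostname string) string {
        if strings.Contains(hosts, " "+hostname+"\n") || strings.HasSuffix(hosts, " "+hostname) {
            return hosts // the outer `grep -xq '.*\s<name>'` guard: already resolvable
        }
        entry := "127.0.1.1 " + hostname
        lines := strings.Split(hosts, "\n")
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") { // the `sed 's/^127.0.1.1\s.*/.../'` branch
                lines[i] = entry
                return strings.Join(lines, "\n")
            }
        }
        return hosts + entry + "\n" // the `tee -a /etc/hosts` fallback branch
    }

    func main() {
        fmt.Print(patchHosts("127.0.0.1 localhost\n127.0.1.1 ubuntu\n", "newest-cni-20220601111420-6708"))
    }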
	I0601 11:15:27.823434  263941 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:15:27.823466  263941 ubuntu.go:177] setting up certificates
	I0601 11:15:27.823475  263941 provision.go:83] configureAuth start
	I0601 11:15:27.823525  263941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601111420-6708
	I0601 11:15:27.855828  263941 provision.go:138] copyHostCerts
	I0601 11:15:27.855913  263941 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:15:27.855927  263941 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:15:27.855998  263941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:15:27.856131  263941 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:15:27.856146  263941 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:15:27.856185  263941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:15:27.856259  263941 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:15:27.856273  263941 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:15:27.856305  263941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:15:27.856384  263941 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220601111420-6708 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220601111420-6708]
	I0601 11:15:27.939200  263941 provision.go:172] copyRemoteCerts
	I0601 11:15:27.939285  263941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:15:27.939337  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:27.970870  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.054991  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:15:28.072444  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 11:15:28.089392  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:15:28.105846  263941 provision.go:86] duration metric: configureAuth took 282.359015ms
	I0601 11:15:28.105871  263941 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:15:28.106043  263941 config.go:178] Loaded profile config "newest-cni-20220601111420-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:15:28.106055  263941 machine.go:91] provisioned docker machine in 3.579843526s
	I0601 11:15:28.106063  263941 start.go:306] post-start starting for "newest-cni-20220601111420-6708" (driver="docker")
	I0601 11:15:28.106068  263941 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:15:28.106107  263941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:15:28.106147  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:28.137958  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.223355  263941 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:15:28.226130  263941 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:15:28.226164  263941 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:15:28.226178  263941 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:15:28.226185  263941 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:15:28.226198  263941 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:15:28.226245  263941 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:15:28.226316  263941 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:15:28.226388  263941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:15:28.232917  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:15:28.249498  263941 start.go:309] post-start completed in 143.423491ms
	I0601 11:15:28.249569  263941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:15:28.249616  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:28.281910  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.364106  263941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:15:28.367931  263941 fix.go:57] fixHost completed within 4.306519392s
	I0601 11:15:28.367950  263941 start.go:81] releasing machines lock for "newest-cni-20220601111420-6708", held for 4.306556384s
	I0601 11:15:28.368021  263941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601111420-6708
	I0601 11:15:28.400068  263941 ssh_runner.go:195] Run: systemctl --version
	I0601 11:15:28.400114  263941 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:15:28.400127  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:28.400158  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:28.433938  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.435630  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:28.540419  263941 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:15:28.551438  263941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:15:28.560441  263941 docker.go:187] disabling docker service ...
	I0601 11:15:28.560487  263941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:15:28.569793  263941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:15:28.578352  263941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:15:28.655832  263941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:15:28.735490  263941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:15:29.855609  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:32.355089  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:28.745109  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:15:28.757663  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:15:28.765775  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:15:28.774408  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:15:28.781883  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:15:28.789580  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:15:28.796858  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0601 11:15:28.808855  263941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:15:28.814976  263941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:15:28.821274  263941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:15:28.890122  263941 ssh_runner.go:195] Run: sudo systemctl restart containerd
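
The sed edits rewrite individual keys of /etc/containerd/config.toml in place (pause image, OOM-score restriction, cgroup driver, CNI conf dir) and enable an imports drop-in; the base64 payload dmVyc2lvbiA9IDIK decodes to "version = 2". The sysctl steps then make sure bridged traffic traverses iptables and IPv4 forwarding is on before containerd restarts. A condensed sketch of the same flow, with the values shown in the log:

    sudo sed -i \
      -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' \
      -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' \
      -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' \
      /etc/containerd/config.toml
    sudo mkdir -p /etc/containerd/containerd.conf.d
    echo 'version = 2' | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf
    sudo sysctl net.bridge.bridge-nf-call-iptables     # verify bridge netfilter is active
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward    # kube-proxy requires forwarding
    sudo systemctl daemon-reload && sudo systemctl restart containerd
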
	I0601 11:15:28.957431  263941 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:15:28.957500  263941 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:15:28.960944  263941 start.go:468] Will wait 60s for crictl version
	I0601 11:15:28.960999  263941 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:15:28.989353  263941 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:15:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
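
The fatal "server is not initialized yet" is expected this soon after the restart: containerd's CRI plugin comes up asynchronously, so the harness backs off (about 11s here) and retries. An equivalent poll, with an illustrative 60-second cap:

    # wait up to ~60s for containerd's CRI server to answer
    for _ in $(seq 1 30); do
      sudo crictl version && break
      sleep 2
    done
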
	I0601 11:15:34.355560  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:36.855139  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:40.037202  263941 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:15:40.060106  263941 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:15:40.060158  263941 ssh_runner.go:195] Run: containerd --version
	I0601 11:15:40.087408  263941 ssh_runner.go:195] Run: containerd --version
	I0601 11:15:40.116134  263941 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:15:40.117565  263941 cli_runner.go:164] Run: docker network inspect newest-cni-20220601111420-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:15:40.149078  263941 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0601 11:15:40.152412  263941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
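
The /etc/hosts rewrite is idempotent: any stale host.minikube.internal entry is filtered out before the current gateway IP is appended, and the file is swapped in via a temporary copy. The same pattern, using the IP the log resolved:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.67.1\thost.minikube.internal\n'
    } > "/tmp/hosts.$$" && sudo cp "/tmp/hosts.$$" /etc/hosts
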
	I0601 11:15:40.163666  263941 out.go:177]   - kubelet.network-plugin=cni
	I0601 11:15:40.165260  263941 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0601 11:15:40.166639  263941 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:15:38.855353  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:41.355553  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:40.167968  263941 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:15:40.168021  263941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:15:40.190473  263941 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:15:40.190492  263941 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:15:40.190538  263941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:15:40.212585  263941 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:15:40.212603  263941 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:15:40.212648  263941 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:15:40.235894  263941 cni.go:95] Creating CNI manager for ""
	I0601 11:15:40.235921  263941 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:15:40.235938  263941 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0601 11:15:40.235956  263941 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220601111420-6708 NodeName:newest-cni-20220601111420-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:15:40.236110  263941 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220601111420-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
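The config above stacks four documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), split on the --- separators. One hedged way to sanity-check such a file before the phased init further down is kubeadm's dry-run mode, which renders manifests without touching the node:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
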
	I0601 11:15:40.236185  263941 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220601111420-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601111420-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
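
The generated drop-in clears the packaged ExecStart with an empty ExecStart= line before setting the full kubelet command, which is the standard systemd override idiom. Once the files transferred below are in place, the effective unit can be checked with:

    systemctl cat kubelet          # packaged unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload   # make systemd re-read the new drop-in
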
	I0601 11:15:40.236231  263941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:15:40.242822  263941 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:15:40.242876  263941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:15:40.249239  263941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (612 bytes)
	I0601 11:15:40.261710  263941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:15:40.273575  263941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2195 bytes)
	I0601 11:15:40.285871  263941 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:15:40.288735  263941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:15:40.297596  263941 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708 for IP: 192.168.67.2
	I0601 11:15:40.297679  263941 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:15:40.297717  263941 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:15:40.297792  263941 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/client.key
	I0601 11:15:40.297858  263941 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/apiserver.key.c7fa3a9e
	I0601 11:15:40.297891  263941 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/proxy-client.key
	I0601 11:15:40.297985  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:15:40.298016  263941 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:15:40.298027  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:15:40.298054  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:15:40.298091  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:15:40.298119  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:15:40.298157  263941 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:15:40.299260  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:15:40.316519  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:15:40.332783  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:15:40.348789  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601111420-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:15:40.365111  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:15:40.381257  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:15:40.397766  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:15:40.414346  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:15:40.430808  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:15:40.446818  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:15:40.462868  263941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:15:40.479131  263941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:15:40.491365  263941 ssh_runner.go:195] Run: openssl version
	I0601 11:15:40.495890  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:15:40.502904  263941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:15:40.505746  263941 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:15:40.505788  263941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:15:40.510219  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:15:40.517063  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:15:40.524064  263941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:15:40.526999  263941 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:15:40.527033  263941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:15:40.531472  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:15:40.537733  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:15:40.544625  263941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:15:40.547474  263941 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:15:40.547517  263941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:15:40.552007  263941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
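
The openssl/ln sequence above is how an OpenSSL-style trust store is populated: each CA is staged in /usr/share/ca-certificates, linked into /etc/ssl/certs, and finally aliased as <subject-hash>.0, the name openssl x509 -hash derives and verifiers actually look up. The pattern for a single certificate, using minikubeCA as in the log:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")       # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$hash.0"
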
	I0601 11:15:40.558495  263941 kubeadm.go:395] StartCluster: {Name:newest-cni-20220601111420-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601111420-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:15:40.558578  263941 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:15:40.558609  263941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:15:40.583148  263941 cri.go:87] found id: "485452f005e99f0f22bb03c4ca8c82fba5f6780c94070d8df7bd99044ffee4b9"
	I0601 11:15:40.583176  263941 cri.go:87] found id: "b507e47c39da0409cb255658e88a3d49752f7f998cf2f1cc34d63f78b7ddd011"
	I0601 11:15:40.583186  263941 cri.go:87] found id: "75894607da19e2350146806345ec03c2d10cca1c7d18ad1af5ec86948940418a"
	I0601 11:15:40.583195  263941 cri.go:87] found id: "4a9c8064a88cb9fe7f7cd29efcb27e351ec7ee84e8b53663018f531249450887"
	I0601 11:15:40.583205  263941 cri.go:87] found id: "e26d2e029235f2fee5bc272de83b26dbe1b0ceaa62fa9f5fd6fc9cad51d82152"
	I0601 11:15:40.583214  263941 cri.go:87] found id: "80ce00d6c8d0224e68bd82723b28724cb9452b78a8ff6d2c4494f3f6e02a6931"
	I0601 11:15:40.583220  263941 cri.go:87] found id: ""
	I0601 11:15:40.583261  263941 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:15:40.595217  263941 cri.go:114] JSON = null
	W0601 11:15:40.595264  263941 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
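
The warning is a consistency check rather than a failure: the CRI API reports six kube-system containers while runc's low-level listing under the k8s.io root shows none paused (JSON = null), so there is nothing to unpause and the restart path proceeds. The two inventories being compared:

    # CRI view: kube-system containers in any state
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # runc view: raw container states in containerd's k8s.io namespace
    sudo runc --root /run/containerd/runc/k8s.io list -f json
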
	I0601 11:15:40.595339  263941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:15:40.601849  263941 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:15:40.601867  263941 kubeadm.go:626] restartCluster start
	I0601 11:15:40.601901  263941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:15:40.607903  263941 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:40.609082  263941 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220601111420-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:15:40.609970  263941 kubeconfig.go:127] "newest-cni-20220601111420-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:15:40.611243  263941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:15:40.612890  263941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:15:40.619118  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:40.619151  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:40.626770  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:40.827164  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:40.827243  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:40.835913  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.027171  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.027231  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.035630  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.226875  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.226947  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.235260  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.427590  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.427664  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.436391  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.627713  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.627790  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.636578  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:41.827917  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:41.827985  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:41.836472  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.027740  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.027825  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.036551  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.227897  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.227973  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.236376  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.427645  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.427728  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.436268  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.627544  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.627605  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.636087  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:42.827365  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:42.827443  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:42.835962  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.027237  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.027309  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.035670  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.226878  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.226948  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.235382  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.427671  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.427744  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.436178  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.627481  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.627549  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.635795  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.635816  263941 api_server.go:165] Checking apiserver status ...
	I0601 11:15:43.635857  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:15:43.643323  263941 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.643343  263941 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 11:15:43.643349  263941 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:15:43.643362  263941 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:15:43.643409  263941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:15:43.666360  263941 cri.go:87] found id: "485452f005e99f0f22bb03c4ca8c82fba5f6780c94070d8df7bd99044ffee4b9"
	I0601 11:15:43.666384  263941 cri.go:87] found id: "b507e47c39da0409cb255658e88a3d49752f7f998cf2f1cc34d63f78b7ddd011"
	I0601 11:15:43.666392  263941 cri.go:87] found id: "75894607da19e2350146806345ec03c2d10cca1c7d18ad1af5ec86948940418a"
	I0601 11:15:43.666399  263941 cri.go:87] found id: "4a9c8064a88cb9fe7f7cd29efcb27e351ec7ee84e8b53663018f531249450887"
	I0601 11:15:43.666405  263941 cri.go:87] found id: "e26d2e029235f2fee5bc272de83b26dbe1b0ceaa62fa9f5fd6fc9cad51d82152"
	I0601 11:15:43.666411  263941 cri.go:87] found id: "80ce00d6c8d0224e68bd82723b28724cb9452b78a8ff6d2c4494f3f6e02a6931"
	I0601 11:15:43.666428  263941 cri.go:87] found id: ""
	I0601 11:15:43.666434  263941 cri.go:232] Stopping containers: [485452f005e99f0f22bb03c4ca8c82fba5f6780c94070d8df7bd99044ffee4b9 b507e47c39da0409cb255658e88a3d49752f7f998cf2f1cc34d63f78b7ddd011 75894607da19e2350146806345ec03c2d10cca1c7d18ad1af5ec86948940418a 4a9c8064a88cb9fe7f7cd29efcb27e351ec7ee84e8b53663018f531249450887 e26d2e029235f2fee5bc272de83b26dbe1b0ceaa62fa9f5fd6fc9cad51d82152 80ce00d6c8d0224e68bd82723b28724cb9452b78a8ff6d2c4494f3f6e02a6931]
	I0601 11:15:43.666476  263941 ssh_runner.go:195] Run: which crictl
	I0601 11:15:43.669061  263941 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 485452f005e99f0f22bb03c4ca8c82fba5f6780c94070d8df7bd99044ffee4b9 b507e47c39da0409cb255658e88a3d49752f7f998cf2f1cc34d63f78b7ddd011 75894607da19e2350146806345ec03c2d10cca1c7d18ad1af5ec86948940418a 4a9c8064a88cb9fe7f7cd29efcb27e351ec7ee84e8b53663018f531249450887 e26d2e029235f2fee5bc272de83b26dbe1b0ceaa62fa9f5fd6fc9cad51d82152 80ce00d6c8d0224e68bd82723b28724cb9452b78a8ff6d2c4494f3f6e02a6931
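
Teardown order matters here: the kube-system containers are stopped through crictl before the kubelet, so the kubelet cannot resurrect static pods mid-reconfigure. The same teardown with the IDs gathered dynamically:

    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && sudo crictl stop $ids   # deliberately unquoted: one ID per argument
    sudo systemctl stop kubelet
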
	I0601 11:15:43.693039  263941 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:15:43.702809  263941 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:15:43.709582  263941 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  1 11:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 11:14 /etc/kubernetes/scheduler.conf
	
	I0601 11:15:43.709641  263941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:15:43.716303  263941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:15:43.722697  263941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:15:43.729346  263941 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.729383  263941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:15:43.736021  263941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:15:43.855489  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:45.856143  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:48.355487  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:43.742199  263941 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:15:43.744599  263941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
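
The grep probes decide which of the four kubeconfigs to regenerate: any file that does not already point at the shared control-plane endpoint is deleted so the kubeconfig phase below recreates it. Compressed into a loop:

    ep='https://control-plane.minikube.internal:8443'
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
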
	I0601 11:15:43.750661  263941 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:15:43.757080  263941 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:15:43.757099  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:43.799332  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:44.640811  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:44.778775  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:44.825201  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
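
Rather than a full kubeadm init, the restart replays individual phases in order: certificates, kubeconfigs, kubelet bootstrap, control-plane static pods, and local etcd. The same sequence as plain kubeadm invocations, assuming kubeadm is on PATH (the log pins it to the minikube binaries directory):

    cfg=/var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase certs all --config "$cfg"
    sudo kubeadm init phase kubeconfig all --config "$cfg"
    sudo kubeadm init phase kubelet-start --config "$cfg"
    sudo kubeadm init phase control-plane all --config "$cfg"
    sudo kubeadm init phase etcd local --config "$cfg"
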
	I0601 11:15:44.878693  263941 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:15:44.878756  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:45.387292  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:45.887437  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:46.386681  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:46.887711  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:47.386774  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:47.886999  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:48.387146  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:50.856156  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:53.355090  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:48.887298  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:49.387361  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:49.887409  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:50.386663  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:50.887194  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:50.963594  263941 api_server.go:71] duration metric: took 6.084905922s to wait for apiserver process to appear ...
	I0601 11:15:50.963625  263941 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:15:50.963637  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:50.964071  263941 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0601 11:15:51.464797  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:54.285946  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:15:54.285972  263941 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:15:54.464237  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:54.470757  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:15:54.470781  263941 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:15:54.965028  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:54.969042  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:15:54.969073  263941 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:15:55.464581  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:55.468499  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0601 11:15:55.474952  263941 api_server.go:140] control plane version: v1.23.6
	I0601 11:15:55.474977  263941 api_server.go:130] duration metric: took 4.511346167s to wait for apiserver health ...
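
The 403 / 500 / 200 progression above is the normal bring-up order: anonymous probes are rejected until the RBAC bootstrap roles exist, /healthz then reports failing poststarthooks, and finally everything settles. The wait reduces to a curl loop; the per-check [+]/[-] breakdown comes back on failure, or on demand with ?verbose:

    url=https://192.168.67.2:8443/healthz
    until [ "$(curl -ksm 2 "$url")" = ok ]; do
      sleep 0.5
    done
    curl -ksm 2 "$url?verbose"    # itemized health checks
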
	I0601 11:15:55.474987  263941 cni.go:95] Creating CNI manager for ""
	I0601 11:15:55.474993  263941 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:15:55.476649  263941 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:15:55.478132  263941 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:15:55.481790  263941 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:15:55.481811  263941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:15:55.495316  263941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:15:56.321166  263941 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:15:56.328415  263941 system_pods.go:59] 9 kube-system pods found
	I0601 11:15:56.328445  263941 system_pods.go:61] "coredns-64897985d-84z4b" [61675132-c8b8-4faa-81c0-afc49bcb9115] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.328454  263941 system_pods.go:61] "etcd-newest-cni-20220601111420-6708" [cdadc9cc-7472-44f1-8727-1510dd722c1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 11:15:56.328472  263941 system_pods.go:61] "kindnet-mrmdr" [a0f0d07c-f270-4d94-a1e1-739b17c1abfd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:15:56.328482  263941 system_pods.go:61] "kube-apiserver-newest-cni-20220601111420-6708" [2c130ee1-e9f5-4d2f-8871-e71f31fb1c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:15:56.328500  263941 system_pods.go:61] "kube-controller-manager-newest-cni-20220601111420-6708" [5dac330c-fddb-4715-90fd-e1b77c0f6b67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:15:56.328509  263941 system_pods.go:61] "kube-proxy-n497l" [015d9acb-73fb-47e5-bb6c-9856dc97937f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:15:56.328515  263941 system_pods.go:61] "kube-scheduler-newest-cni-20220601111420-6708" [4dc87b0e-7ab2-41a0-9cc4-4c358a05e707] Running
	I0601 11:15:56.328526  263941 system_pods.go:61] "metrics-server-b955d9d8-l7q2p" [7dc19801-604c-4dea-8f7c-f149d7c519db] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.328535  263941 system_pods.go:61] "storage-provisioner" [8afd3e8e-ea29-4b3e-b4dd-ce13b93d0469] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.328539  263941 system_pods.go:74] duration metric: took 7.354592ms to wait for pod list to return data ...
	I0601 11:15:56.328550  263941 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:15:56.331379  263941 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:15:56.331405  263941 node_conditions.go:123] node cpu capacity is 8
	I0601 11:15:56.331415  263941 node_conditions.go:105] duration metric: took 2.860329ms to run NodePressure ...
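
The NodePressure check reads capacity straight off the node object (here roughly 290Gi of ephemeral storage and 8 CPUs). The equivalent ad-hoc query, assuming a working kubeconfig:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'
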
	I0601 11:15:56.331429  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:15:56.521154  263941 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:15:56.528009  263941 ops.go:34] apiserver oom_adj: -16
	I0601 11:15:56.528032  263941 kubeadm.go:630] restartCluster took 15.926157917s
	I0601 11:15:56.528040  263941 kubeadm.go:397] StartCluster complete in 15.969552087s
	I0601 11:15:56.528055  263941 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:15:56.528150  263941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:15:56.529675  263941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:15:56.533017  263941 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220601111420-6708" rescaled to 1
	I0601 11:15:56.533078  263941 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:15:56.533096  263941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:15:56.535966  263941 out.go:177] * Verifying Kubernetes components...
	I0601 11:15:56.533184  263941 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 11:15:56.533325  263941 config.go:178] Loaded profile config "newest-cni-20220601111420-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:15:56.537639  263941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:15:56.537656  263941 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220601111420-6708"
	I0601 11:15:56.537672  263941 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220601111420-6708"
	I0601 11:15:56.537676  263941 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220601111420-6708"
	W0601 11:15:56.537684  263941 addons.go:165] addon metrics-server should already be in state true
	I0601 11:15:56.537694  263941 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220601111420-6708"
	I0601 11:15:56.537644  263941 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220601111420-6708"
	I0601 11:15:56.537706  263941 addons.go:65] Setting dashboard=true in profile "newest-cni-20220601111420-6708"
	I0601 11:15:56.537742  263941 host.go:66] Checking if "newest-cni-20220601111420-6708" exists ...
	I0601 11:15:56.537756  263941 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220601111420-6708"
	W0601 11:15:56.537788  263941 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:15:56.537848  263941 host.go:66] Checking if "newest-cni-20220601111420-6708" exists ...
	I0601 11:15:56.537761  263941 addons.go:153] Setting addon dashboard=true in "newest-cni-20220601111420-6708"
	W0601 11:15:56.537913  263941 addons.go:165] addon dashboard should already be in state true
	I0601 11:15:56.537975  263941 host.go:66] Checking if "newest-cni-20220601111420-6708" exists ...
	I0601 11:15:56.538058  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.538239  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.538387  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.538439  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.585758  263941 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:15:56.587350  263941 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:15:56.588849  263941 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:15:56.590411  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:15:56.590428  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:15:56.590467  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:56.588805  263941 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:15:56.590499  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:15:56.592950  263941 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:15:56.590546  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:56.595053  263941 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:15:56.595071  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:15:56.595138  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:56.603468  263941 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220601111420-6708"
	W0601 11:15:56.603493  263941 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:15:56.603522  263941 host.go:66] Checking if "newest-cni-20220601111420-6708" exists ...
	I0601 11:15:56.603903  263941 cli_runner.go:164] Run: docker container inspect newest-cni-20220601111420-6708 --format={{.State.Status}}
	I0601 11:15:56.619336  263941 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:15:56.619430  263941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:15:56.619353  263941 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 11:15:56.636422  263941 api_server.go:71] duration metric: took 103.308361ms to wait for apiserver process to appear ...
	I0601 11:15:56.636452  263941 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:15:56.636465  263941 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0601 11:15:56.636778  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:56.641977  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:56.642020  263941 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0601 11:15:56.642243  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:56.644077  263941 api_server.go:140] control plane version: v1.23.6
	I0601 11:15:56.644097  263941 api_server.go:130] duration metric: took 7.638012ms to wait for apiserver health ...
	I0601 11:15:56.644106  263941 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:15:56.646495  263941 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:15:56.646517  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:15:56.646565  263941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601111420-6708
	I0601 11:15:56.650901  263941 system_pods.go:59] 9 kube-system pods found
	I0601 11:15:56.650933  263941 system_pods.go:61] "coredns-64897985d-84z4b" [61675132-c8b8-4faa-81c0-afc49bcb9115] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.650944  263941 system_pods.go:61] "etcd-newest-cni-20220601111420-6708" [cdadc9cc-7472-44f1-8727-1510dd722c1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 11:15:56.650954  263941 system_pods.go:61] "kindnet-mrmdr" [a0f0d07c-f270-4d94-a1e1-739b17c1abfd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:15:56.650969  263941 system_pods.go:61] "kube-apiserver-newest-cni-20220601111420-6708" [2c130ee1-e9f5-4d2f-8871-e71f31fb1c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:15:56.650987  263941 system_pods.go:61] "kube-controller-manager-newest-cni-20220601111420-6708" [5dac330c-fddb-4715-90fd-e1b77c0f6b67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:15:56.651000  263941 system_pods.go:61] "kube-proxy-n497l" [015d9acb-73fb-47e5-bb6c-9856dc97937f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:15:56.651017  263941 system_pods.go:61] "kube-scheduler-newest-cni-20220601111420-6708" [4dc87b0e-7ab2-41a0-9cc4-4c358a05e707] Running
	I0601 11:15:56.651030  263941 system_pods.go:61] "metrics-server-b955d9d8-l7q2p" [7dc19801-604c-4dea-8f7c-f149d7c519db] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.651042  263941 system_pods.go:61] "storage-provisioner" [8afd3e8e-ea29-4b3e-b4dd-ce13b93d0469] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:15:56.651048  263941 system_pods.go:74] duration metric: took 6.936145ms to wait for pod list to return data ...
	I0601 11:15:56.651060  263941 default_sa.go:34] waiting for default service account to be created ...
	I0601 11:15:56.653680  263941 default_sa.go:45] found service account: "default"
	I0601 11:15:56.653708  263941 default_sa.go:55] duration metric: took 2.640942ms for default service account to be created ...
	I0601 11:15:56.653719  263941 kubeadm.go:572] duration metric: took 120.609786ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0601 11:15:56.653752  263941 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:15:56.656341  263941 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:15:56.656360  263941 node_conditions.go:123] node cpu capacity is 8
	I0601 11:15:56.656374  263941 node_conditions.go:105] duration metric: took 2.616904ms to run NodePressure ...
	I0601 11:15:56.656386  263941 start.go:213] waiting for startup goroutines ...
	I0601 11:15:56.684635  263941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601111420-6708/id_rsa Username:docker}
	I0601 11:15:56.729362  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:15:56.729387  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:15:56.737513  263941 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:15:56.737533  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:15:56.738082  263941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:15:56.742656  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:15:56.742678  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:15:56.751813  263941 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:15:56.751842  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:15:56.756435  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:15:56.756456  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:15:56.766690  263941 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:15:56.766709  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:15:56.771728  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:15:56.771748  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:15:56.779554  263941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:15:56.780709  263941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:15:56.785464  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:15:56.785491  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:15:56.861755  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:15:56.861784  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:15:56.885416  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:15:56.885451  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:15:56.964682  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:15:56.964712  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:15:56.981552  263941 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:15:56.981583  263941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:15:57.060456  263941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:15:57.254453  263941 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220601111420-6708"
	I0601 11:15:57.416236  263941 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 11:15:57.417673  263941 addons.go:417] enableAddons completed in 884.4947ms
	I0601 11:15:57.455697  263941 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0601 11:15:57.458150  263941 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220601111420-6708" cluster and "default" namespace by default
	I0601 11:15:55.356481  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:57.855372  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:15:59.855419  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:02.354744  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:04.355179  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:06.355418  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:08.355954  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
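	The Unschedulable polling above means coredns-5644d7b6d9-5z28m never leaves Pending: the single node still carries a NoSchedule taint that the pod does not tolerate. A minimal way to confirm the taint and pod state, assuming kubectl is pointed at this run's kubeconfig (commands are illustrative, not captured by the harness):
	  $ kubectl get nodes -o jsonpath='{.items[0].spec.taints}'
	  $ kubectl -n kube-system get pods -o wide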
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	f1db42bbd17fa       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   20ed2db10bff6
	4a8be0c7cfc53       4c03754524064       12 minutes ago      Running             kube-proxy                0                   043df8eb6f8fb
	d49ab0e8a34f4       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   31f96fd01399b
	c32cb0a91408a       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   87ef42c5de136
	a985029383eb2       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   a4a80ab623aae
	b8dd730d917c4       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   dfde8cf669db7
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:03:36 UTC, end at Wed 2022-06-01 11:16:14 UTC. --
	Jun 01 11:09:31 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:31.490278883Z" level=warning msg="cleaning up after shim disconnected" id=44a64d6574af41b7959f71d1dab2a88484c78c34edb54a7a824ddd43a44b981e namespace=k8s.io
	Jun 01 11:09:31 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:31.490292961Z" level=info msg="cleaning up dead shim"
	Jun 01 11:09:31 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:31.499482545Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:09:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2407 runtime=io.containerd.runc.v2\n"
	Jun 01 11:09:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:32.336053606Z" level=info msg="RemoveContainer for \"303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494\""
	Jun 01 11:09:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:32.342792466Z" level=info msg="RemoveContainer for \"303244519eacb93040778925202eb35640233defc4ec16bdee987993557c7494\" returns successfully"
	Jun 01 11:09:43 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:43.691241193Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jun 01 11:09:43 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:43.704218206Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\""
	Jun 01 11:09:43 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:43.704753941Z" level=info msg="StartContainer for \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\""
	Jun 01 11:09:43 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:09:43.772300743Z" level=info msg="StartContainer for \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\" returns successfully"
	Jun 01 11:12:23 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:23.994491240Z" level=info msg="shim disconnected" id=6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65
	Jun 01 11:12:23 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:23.994563158Z" level=warning msg="cleaning up after shim disconnected" id=6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65 namespace=k8s.io
	Jun 01 11:12:23 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:23.994577765Z" level=info msg="cleaning up dead shim"
	Jun 01 11:12:24 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:24.004045038Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:12:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2504 runtime=io.containerd.runc.v2\n"
	Jun 01 11:12:24 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:24.645186775Z" level=info msg="RemoveContainer for \"44a64d6574af41b7959f71d1dab2a88484c78c34edb54a7a824ddd43a44b981e\""
	Jun 01 11:12:24 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:24.649222973Z" level=info msg="RemoveContainer for \"44a64d6574af41b7959f71d1dab2a88484c78c34edb54a7a824ddd43a44b981e\" returns successfully"
	Jun 01 11:12:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:51.691014066Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jun 01 11:12:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:51.703587307Z" level=info msg="CreateContainer within sandbox \"20ed2db10bff6252ad2001c172710e70a53dd349d97b8a17babf3a47f9171c43\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187\""
	Jun 01 11:12:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:51.704136363Z" level=info msg="StartContainer for \"f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187\""
	Jun 01 11:12:51 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:12:51.769975659Z" level=info msg="StartContainer for \"f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187\" returns successfully"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.089351164Z" level=info msg="shim disconnected" id=f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.089416099Z" level=warning msg="cleaning up after shim disconnected" id=f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187 namespace=k8s.io
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.089443667Z" level=info msg="cleaning up dead shim"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.098608733Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:15:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2605 runtime=io.containerd.runc.v2\n"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.986040568Z" level=info msg="RemoveContainer for \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\""
	Jun 01 11:15:32 embed-certs-20220601110327-6708 containerd[516]: time="2022-06-01T11:15:32.990700608Z" level=info msg="RemoveContainer for \"6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220601110327-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220601110327-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=embed-certs-20220601110327-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_03_56_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:03:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220601110327-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:16:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:14:13 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:14:13 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:14:13 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:14:13 +0000   Wed, 01 Jun 2022 11:03:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220601110327-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                d600b159-ea34-4ea3-ab62-e86c595f06ef
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220601110327-6708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-92tfl                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20220601110327-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20220601110327-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-99lsz                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20220601110327-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
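	The Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint above both trace to the kubelet's "cni plugin not initialized" message. With the docker driver the node is itself a container, so the CNI config and the kindnet pod can be inspected from the host; a sketch, assuming the default conf dir /etc/cni/net.d (this profile may use a different directory):
	  $ docker exec embed-certs-20220601110327-6708 ls -la /etc/cni/net.d
	  $ docker exec embed-certs-20220601110327-6708 crictl ps -a --name kindnet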
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2] <==
	* {"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220601110327-6708 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:03:50.084Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.085Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.086Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-06-01T11:03:50.086Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:06:59.331Z","caller":"traceutil/trace.go:171","msg":"trace[993403062] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"164.749443ms","start":"2022-06-01T11:06:59.166Z","end":"2022-06-01T11:06:59.331Z","steps":["trace[993403062] 'read index received'  (duration: 164.741295ms)","trace[993403062] 'applied index is now lower than readState.Index'  (duration: 7.261µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:06:59.332Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"166.049774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20220601110327-6708\" ","response":"range_response_count:1 size:4776"}
	{"level":"info","ts":"2022-06-01T11:06:59.332Z","caller":"traceutil/trace.go:171","msg":"trace[243859244] range","detail":"{range_begin:/registry/minions/embed-certs-20220601110327-6708; range_end:; response_count:1; response_revision:516; }","duration":"166.144768ms","start":"2022-06-01T11:06:59.166Z","end":"2022-06-01T11:06:59.332Z","steps":["trace[243859244] 'agreement among raft nodes before linearized reading'  (duration: 164.864212ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T11:13:50.105Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":546}
	{"level":"info","ts":"2022-06-01T11:13:50.106Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":546,"took":"429.021µs"}
	{"level":"warn","ts":"2022-06-01T11:14:26.753Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"233.709122ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/busybox.16f47a88b4a70945\" ","response":"range_response_count:1 size:676"}
	{"level":"info","ts":"2022-06-01T11:14:26.753Z","caller":"traceutil/trace.go:171","msg":"trace[840453810] range","detail":"{range_begin:/registry/events/default/busybox.16f47a88b4a70945; range_end:; response_count:1; response_revision:652; }","duration":"233.80507ms","start":"2022-06-01T11:14:26.520Z","end":"2022-06-01T11:14:26.753Z","steps":["trace[840453810] 'agreement among raft nodes before linearized reading'  (duration: 46.352891ms)","trace[840453810] 'range keys from in-memory index tree'  (duration: 187.31702ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:14:26.753Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"226.054261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:14:26.753Z","caller":"traceutil/trace.go:171","msg":"trace[91545500] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:652; }","duration":"226.231448ms","start":"2022-06-01T11:14:26.527Z","end":"2022-06-01T11:14:26.753Z","steps":["trace[91545500] 'agreement among raft nodes before linearized reading'  (duration: 38.693159ms)","trace[91545500] 'count revisions from in-memory index tree'  (duration: 187.331729ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:14:27.037Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"181.740284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/storage-provisioner.16f47a507eb9e79b\" ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2022-06-01T11:14:27.037Z","caller":"traceutil/trace.go:171","msg":"trace[695362410] range","detail":"{range_begin:/registry/events/kube-system/storage-provisioner.16f47a507eb9e79b; range_end:; response_count:1; response_revision:653; }","duration":"181.817862ms","start":"2022-06-01T11:14:26.856Z","end":"2022-06-01T11:14:27.037Z","steps":["trace[695362410] 'agreement among raft nodes before linearized reading'  (duration: 83.115698ms)","trace[695362410] 'range keys from in-memory index tree'  (duration: 98.584798ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  11:16:14 up 58 min,  0 users,  load average: 3.41, 3.45, 2.46
	Linux embed-certs-20220601110327-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a] <==
	* I0601 11:03:52.353146       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:03:52.353180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:03:52.353221       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:03:52.353225       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:03:52.353496       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:03:52.354371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 11:03:53.223928       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:03:53.230007       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 11:03:53.232751       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:03:53.233028       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 11:03:53.233046       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:03:53.654795       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:03:53.685311       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:03:53.775744       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:03:53.783710       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0601 11:03:53.784644       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:03:53.788004       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:03:54.362824       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:03:55.411558       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:03:55.418495       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:03:55.427653       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:04:00.570330       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:04:08.019838       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:04:08.117524       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:04:08.961758       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0] <==
	* I0601 11:04:07.415430       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:04:07.415463       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0601 11:04:07.418512       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:04:07.461867       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:04:07.464497       1 shared_informer.go:247] Caches are synced for taint 
	I0601 11:04:07.464573       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0601 11:04:07.464636       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0601 11:04:07.464710       1 event.go:294] "Event occurred" object="embed-certs-20220601110327-6708" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220601110327-6708 event: Registered Node embed-certs-20220601110327-6708 in Controller"
	W0601 11:04:07.464641       1 node_lifecycle_controller.go:1012] Missing timestamp for Node embed-certs-20220601110327-6708. Assuming now as a timestamp.
	I0601 11:04:07.464736       1 shared_informer.go:247] Caches are synced for GC 
	I0601 11:04:07.464789       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 11:04:07.464794       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0601 11:04:07.465561       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:04:07.466207       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:04:07.466846       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 11:04:07.844776       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:04:07.860043       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:04:07.860076       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:04:08.021708       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:04:08.045076       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:04:08.122691       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-99lsz"
	I0601 11:04:08.125091       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-92tfl"
	I0601 11:04:08.220606       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-2ms6r"
	I0601 11:04:08.226533       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9dpfv"
	I0601 11:04:08.241748       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-2ms6r"
	
	* 
	* ==> kube-proxy [4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6] <==
	* I0601 11:04:08.785335       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0601 11:04:08.785408       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0601 11:04:08.785447       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:04:08.956676       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:04:08.957522       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:04:08.957544       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:04:08.957576       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:04:08.958014       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:04:08.958572       1 config.go:317] "Starting service config controller"
	I0601 11:04:08.958596       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:04:08.959266       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:04:08.959287       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:04:09.058849       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:04:09.059356       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f] <==
	* W0601 11:03:52.358283       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:03:52.358434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:03:52.358594       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:52.358832       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:52.358601       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:03:52.358891       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:03:52.358710       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:03:52.358916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:03:53.226960       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:03:53.227001       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:03:53.235048       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:03:53.235096       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:03:53.321811       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:03:53.321848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:03:53.385122       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:53.385163       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:53.405212       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:03:53.405259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:03:53.455747       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:53.455790       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:53.455746       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:03:53.455816       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:03:53.557775       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:03:53.557818       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:03:55.783702       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:03:36 UTC, end at Wed 2022-06-01 11:16:14 UTC. --
	Jun 01 11:14:55 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:14:55.935198    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:00 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:00.935843    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:05 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:05.936504    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:10 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:10.937622    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:15 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:15.938561    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:20 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:20.939399    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:25 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:25.940578    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:30 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:30.941588    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:15:32.984806    1320 scope.go:110] "RemoveContainer" containerID="6adba67c52be0448acfaf806e89c061a9bebadc9090b607e58a012634b901e65"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:15:32.985130    1320 scope.go:110] "RemoveContainer" containerID="f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	Jun 01 11:15:32 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:32.985444    1320 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-92tfl_kube-system(1e2e52a8-4f89-49af-9741-f79384628a29)\"" pod="kube-system/kindnet-92tfl" podUID=1e2e52a8-4f89-49af-9741-f79384628a29
	Jun 01 11:15:35 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:35.942217    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:40 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:40.943213    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:45 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:15:45.688928    1320 scope.go:110] "RemoveContainer" containerID="f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	Jun 01 11:15:45 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:45.689200    1320 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-92tfl_kube-system(1e2e52a8-4f89-49af-9741-f79384628a29)\"" pod="kube-system/kindnet-92tfl" podUID=1e2e52a8-4f89-49af-9741-f79384628a29
	Jun 01 11:15:45 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:45.944161    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:50 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:50.945767    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:55 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:55.947408    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:15:56 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:15:56.688789    1320 scope.go:110] "RemoveContainer" containerID="f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	Jun 01 11:15:56 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:15:56.689204    1320 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-92tfl_kube-system(1e2e52a8-4f89-49af-9741-f79384628a29)\"" pod="kube-system/kindnet-92tfl" podUID=1e2e52a8-4f89-49af-9741-f79384628a29
	Jun 01 11:16:00 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:16:00.948598    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:16:05 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:16:05.949621    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:16:07 embed-certs-20220601110327-6708 kubelet[1320]: I0601 11:16:07.688654    1320 scope.go:110] "RemoveContainer" containerID="f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	Jun 01 11:16:07 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:16:07.688944    1320 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-92tfl_kube-system(1e2e52a8-4f89-49af-9741-f79384628a29)\"" pod="kube-system/kindnet-92tfl" podUID=1e2e52a8-4f89-49af-9741-f79384628a29
	Jun 01 11:16:10 embed-certs-20220601110327-6708 kubelet[1320]: E0601 11:16:10.951264    1320 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
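The kube-scheduler "forbidden" warnings at the top of these logs are the usual startup transients that clear once RBAC bootstrapping completes; the persistent problem is the kindnet-cni container in CrashLoopBackOff, which leaves the CNI uninitialized and the node NotReady. A minimal diagnostic sketch, assuming the cluster were still running (the pod name kindnet-92tfl and the cni-conf-dir value are taken from the logs in this report; the commands are standard kubectl/minikube usage, not part of the recorded run):

	# Why is kindnet crash-looping? Fetch logs from the previous (crashed) container attempt.
	kubectl --context embed-certs-20220601110327-6708 -n kube-system logs kindnet-92tfl -c kindnet-cni --previous
	# This profile points kubelet at a non-standard CNI conf dir (kubelet.cni-conf-dir=/etc/cni/net.mk);
	# check whether kindnet ever wrote a CNI config there.
	minikube -p embed-certs-20220601110327-6708 ssh -- ls -la /etc/cni/net.mk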
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-9dpfv storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 describe pod busybox coredns-64897985d-9dpfv storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220601110327-6708 describe pod busybox coredns-64897985d-9dpfv storage-provisioner: exit status 1 (60.377609ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgcrb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wgcrb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  49s (x8 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-9dpfv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220601110327-6708 describe pod busybox coredns-64897985d-9dpfv storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (484.46s)
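The two NotFound errors in the stderr block above are an artifact of how the post-mortem helper invokes kubectl: coredns-64897985d-9dpfv and storage-provisioner live in the kube-system namespace, while `kubectl describe pod` without -n queries only the current namespace (default). A per-namespace equivalent, shown for illustration only (not part of the recorded run):

	kubectl --context embed-certs-20220601110327-6708 describe pod busybox
	kubectl --context embed-certs-20220601110327-6708 -n kube-system describe pod coredns-64897985d-9dpfv storage-provisioner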

TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.43s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [8adbbda0-694f-40d8-9c0b-7c4e4afc85ac] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
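testdata/busybox.yaml itself is not reproduced in this report, but the pod it creates can be reconstructed from the describe output below. A hypothetical equivalent, sketched as a kubectl heredoc with field values taken from the recorded describe:

	kubectl --context default-k8s-different-port-20220601110654-6708 create -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF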

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: ***** TestStartStop/group/default-k8s-different-port/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:198: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
start_stop_delete_test.go:198: TestStartStop/group/default-k8s-different-port/serial/DeployApp: showing logs for failed pods as of 2022-06-01 11:19:37.787400757 +0000 UTC m=+3578.787020999
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 describe po busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context default-k8s-different-port-20220601110654-6708 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9mjz (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-c9mjz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  45s (x8 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 logs busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context default-k8s-different-port-20220601110654-6708 logs busybox -n default:
start_stop_delete_test.go:198: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
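The FailedScheduling event is the downstream symptom: the node keeps its node.kubernetes.io/not-ready taint because the Ready condition never turns True while the CNI is uninitialized. A quick way to confirm that chain (illustrative commands, not part of the recorded run):

	# Node should show STATUS NotReady while the CNI is down
	kubectl --context default-k8s-different-port-20220601110654-6708 get nodes
	# Inspect the taints the scheduler is tripping over
	kubectl --context default-k8s-different-port-20220601110654-6708 get nodes -o jsonpath='{.items[0].spec.taints}'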
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601110654-6708
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220601110654-6708:

-- stdout --
	[
	    {
	        "Id": "dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b",
	        "Created": "2022-06-01T11:07:03.290503902Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:07:03.630929291Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hosts",
	        "LogPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b-json.log",
	        "Name": "/default-k8s-different-port-20220601110654-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220601110654-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220601110654-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220601110654-6708",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220601110654-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220601110654-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7855192596bd9f60fe4ad2cd96f599cd40d7bd62bfad35d8e1f5a897e3270f06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7855192596bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220601110654-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dccf9935a74c",
	                        "default-k8s-different-port-20220601110654-6708"
	                    ],
	                    "NetworkID": "7d52ef0dc0855b59c05da2e66b25f4d0866ad1d653be1fa615e193dd86443771",
	                    "EndpointID": "333c0952bde2fd448463a8d5d563d8e8c8448f605be2cf7fffa411011fe20066",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
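The full inspect dump above can be reduced to just the fields relevant to this failure with docker's --format flag; for example (standard docker CLI usage, shown for illustration):

	# Container state and its IP on the profile network
	docker inspect -f '{{.State.Status}}' default-k8s-different-port-20220601110654-6708
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' default-k8s-different-port-20220601110654-6708
	# Host port forwarded to the (non-default) API server port 8444
	docker port default-k8s-different-port-20220601110654-6708 8444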
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220601110654-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                         | disable-driver-mounts-20220601110654-6708      | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | disable-driver-mounts-20220601110654-6708                  |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | calico-20220601104839-6708                                 | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220601104839-6708                              | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:16:27
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:16:27.030025  270029 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:16:27.030200  270029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:16:27.030210  270029 out.go:309] Setting ErrFile to fd 2...
	I0601 11:16:27.030214  270029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:16:27.030316  270029 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:16:27.030590  270029 out.go:303] Setting JSON to false
	I0601 11:16:27.032104  270029 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3541,"bootTime":1654078646,"procs":726,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:16:27.032160  270029 start.go:125] virtualization: kvm guest
	I0601 11:16:27.034601  270029 out.go:177] * [embed-certs-20220601110327-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:16:27.036027  270029 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:16:27.035970  270029 notify.go:193] Checking for updates...
	I0601 11:16:27.037352  270029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:16:27.038882  270029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:16:27.040231  270029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:16:27.041542  270029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:16:27.043240  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:16:27.043659  270029 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:16:27.081227  270029 docker.go:137] docker version: linux-20.10.16
	I0601 11:16:27.081310  270029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:16:27.182938  270029 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:16:27.109556043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:16:27.183042  270029 docker.go:254] overlay module found
	I0601 11:16:27.185912  270029 out.go:177] * Using the docker driver based on existing profile
	I0601 11:16:27.187159  270029 start.go:284] selected driver: docker
	I0601 11:16:27.187172  270029 start.go:806] validating driver "docker" against &{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:16:27.187275  270029 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:16:27.188164  270029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:16:27.287572  270029 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:16:27.216523745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:16:27.287846  270029 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:16:27.287899  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:16:27.287909  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:16:27.287923  270029 start_flags.go:306] config:
	{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:16:27.290349  270029 out.go:177] * Starting control plane node embed-certs-20220601110327-6708 in cluster embed-certs-20220601110327-6708
	I0601 11:16:27.291691  270029 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:16:27.292997  270029 out.go:177] * Pulling base image ...
	I0601 11:16:27.294363  270029 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:16:27.294386  270029 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:16:27.294393  270029 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:16:27.294434  270029 cache.go:57] Caching tarball of preloaded images
	I0601 11:16:27.295098  270029 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:16:27.295161  270029 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:16:27.295359  270029 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:16:27.338028  270029 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:16:27.338057  270029 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:16:27.338077  270029 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:16:27.338121  270029 start.go:352] acquiring machines lock for embed-certs-20220601110327-6708: {Name:mk2bc8f54b3ac1967b6e5e724f1be8808370dc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:16:27.338232  270029 start.go:356] acquired machines lock for "embed-certs-20220601110327-6708" in 83.619µs
	I0601 11:16:27.338252  270029 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:16:27.338262  270029 fix.go:55] fixHost starting: 
	I0601 11:16:27.338520  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:16:27.369415  270029 fix.go:103] recreateIfNeeded on embed-certs-20220601110327-6708: state=Stopped err=<nil>
	W0601 11:16:27.369444  270029 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:16:27.371758  270029 out.go:177] * Restarting existing docker container for "embed-certs-20220601110327-6708" ...
	I0601 11:16:24.354878  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:26.855526  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:27.373224  270029 cli_runner.go:164] Run: docker start embed-certs-20220601110327-6708
	I0601 11:16:27.750544  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:16:27.784421  270029 kic.go:416] container "embed-certs-20220601110327-6708" state is running.
	I0601 11:16:27.784842  270029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:16:27.816168  270029 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:16:27.816441  270029 machine.go:88] provisioning docker machine ...
	I0601 11:16:27.816482  270029 ubuntu.go:169] provisioning hostname "embed-certs-20220601110327-6708"
	I0601 11:16:27.816529  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:27.849760  270029 main.go:134] libmachine: Using SSH client type: native
	I0601 11:16:27.849917  270029 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0601 11:16:27.849935  270029 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220601110327-6708 && echo "embed-certs-20220601110327-6708" | sudo tee /etc/hostname
	I0601 11:16:27.850521  270029 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54016->127.0.0.1:49437: read: connection reset by peer
	I0601 11:16:30.976432  270029 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220601110327-6708
	
	I0601 11:16:30.976514  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.008861  270029 main.go:134] libmachine: Using SSH client type: native
	I0601 11:16:31.009014  270029 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0601 11:16:31.009044  270029 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220601110327-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220601110327-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220601110327-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
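The shell block above pins the hostname in /etc/hosts idempotently: skip if a matching entry already exists, rewrite an existing 127.0.1.1 line, otherwise append. A rough Go equivalent, assuming root access and abbreviating error handling:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// pinHostname mirrors the shell logic logged above: do nothing if an entry
// for the name exists, rewrite an existing 127.0.1.1 line, else append.
// Illustrative only; touching the real /etc/hosts needs root.
func pinHostname(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	// The "grep -xq '.*\s<name>'" guard: already pinned, nothing to do.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil
	}
	line := "127.0.1.1 " + name
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	out := string(data)
	if re.Match(data) {
		out = re.ReplaceAllString(out, line) // the sed branch
	} else {
		out += line + "\n" // the "tee -a" branch
	}
	return os.WriteFile(hostsPath, []byte(out), 0644)
}

func main() {
	if err := pinHostname("/etc/hosts", "embed-certs-20220601110327-6708"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}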
	I0601 11:16:31.123496  270029 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:16:31.123529  270029 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:16:31.123570  270029 ubuntu.go:177] setting up certificates
	I0601 11:16:31.123582  270029 provision.go:83] configureAuth start
	I0601 11:16:31.123653  270029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:16:31.154648  270029 provision.go:138] copyHostCerts
	I0601 11:16:31.154711  270029 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:16:31.154718  270029 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:16:31.154779  270029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:16:31.154874  270029 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:16:31.154884  270029 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:16:31.154907  270029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:16:31.155010  270029 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:16:31.155022  270029 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:16:31.155045  270029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:16:31.155086  270029 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220601110327-6708 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220601110327-6708]
	I0601 11:16:31.392219  270029 provision.go:172] copyRemoteCerts
	I0601 11:16:31.392269  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:16:31.392296  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.424693  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.507177  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:16:31.523691  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 11:16:31.539588  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:16:31.556391  270029 provision.go:86] duration metric: configureAuth took 432.782419ms
	I0601 11:16:31.556423  270029 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:16:31.556601  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:16:31.556613  270029 machine.go:91] provisioned docker machine in 3.740153286s
	I0601 11:16:31.556620  270029 start.go:306] post-start starting for "embed-certs-20220601110327-6708" (driver="docker")
	I0601 11:16:31.556627  270029 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:16:31.556665  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:16:31.556708  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.588692  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.671170  270029 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:16:31.673879  270029 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:16:31.673904  270029 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:16:31.673913  270029 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:16:31.673921  270029 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:16:31.673932  270029 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:16:31.673995  270029 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:16:31.674092  270029 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:16:31.674203  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:16:31.680491  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:16:31.696768  270029 start.go:309] post-start completed in 140.137646ms
	I0601 11:16:31.696823  270029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:16:31.696867  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.728967  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.808592  270029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:16:31.813696  270029 fix.go:57] fixHost completed within 4.475428594s
	I0601 11:16:31.813724  270029 start.go:81] releasing machines lock for "embed-certs-20220601110327-6708", held for 4.475478152s
	I0601 11:16:31.813806  270029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:16:31.845390  270029 ssh_runner.go:195] Run: systemctl --version
	I0601 11:16:31.845426  270029 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:16:31.845445  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.845474  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.878841  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.879529  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.984532  270029 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:16:31.995279  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:16:32.004188  270029 docker.go:187] disabling docker service ...
	I0601 11:16:32.004230  270029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:16:32.013110  270029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:16:32.021544  270029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:16:29.355547  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:31.855558  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:32.096568  270029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:16:32.177406  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:16:32.186287  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:16:32.198554  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:16:32.206479  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:16:32.214298  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:16:32.221739  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:16:32.229090  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:16:32.236531  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
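(For reference, the base64 payload "dmVyc2lvbiA9IDIK" above decodes to "version = 2", a minimal containerd v2 config stub imported via the 02-containerd.conf drop-in enabled by the preceding sed.)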
	I0601 11:16:32.248478  270029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:16:32.254712  270029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:16:32.260784  270029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:16:32.332262  270029 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:16:32.400990  270029 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:16:32.401055  270029 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:16:32.405246  270029 start.go:468] Will wait 60s for crictl version
	I0601 11:16:32.405339  270029 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:16:32.431671  270029 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:16:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:16:33.855768  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:35.855971  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:38.355081  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:43.479123  270029 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:16:43.501672  270029 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:16:43.501721  270029 ssh_runner.go:195] Run: containerd --version
	I0601 11:16:43.529392  270029 ssh_runner.go:195] Run: containerd --version
	I0601 11:16:43.558583  270029 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:16:43.560125  270029 cli_runner.go:164] Run: docker network inspect embed-certs-20220601110327-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:16:43.591406  270029 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0601 11:16:43.594609  270029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:16:43.605543  270029 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:16:40.355331  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:42.855945  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:43.607033  270029 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:16:43.607086  270029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:16:43.629330  270029 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:16:43.629349  270029 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:16:43.629396  270029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:16:43.651491  270029 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:16:43.651512  270029 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:16:43.651566  270029 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:16:43.675463  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:16:43.675488  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:16:43.675505  270029 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:16:43.675522  270029 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220601110327-6708 NodeName:embed-certs-20220601110327-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:16:43.675702  270029 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220601110327-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
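The kubeadm config dump above is rendered from the option structs logged earlier. A two-field toy of that templating step (not the real generator, which fills the full InitConfiguration/ClusterConfiguration documents):

package main

import (
	"os"
	"text/template"
)

// A deliberately tiny stand-in for the rendering behind
// "kubeadm.go:162] kubeadm config:".
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
	}{"192.168.76.2", 8443})
}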
	
	I0601 11:16:43.675851  270029 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220601110327-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 11:16:43.675928  270029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:16:43.682788  270029 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:16:43.682841  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:16:43.689239  270029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0601 11:16:43.701365  270029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:16:43.712899  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0601 11:16:43.724782  270029 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:16:43.727472  270029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:16:43.736002  270029 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708 for IP: 192.168.76.2
	I0601 11:16:43.736086  270029 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:16:43.736130  270029 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:16:43.736196  270029 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.key
	I0601 11:16:43.736241  270029 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25
	I0601 11:16:43.736273  270029 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key
	I0601 11:16:43.736370  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:16:43.736396  270029 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:16:43.736408  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:16:43.736433  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:16:43.736458  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:16:43.736488  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:16:43.736535  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:16:43.737038  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:16:43.753252  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:16:43.769071  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:16:43.785137  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:16:43.800815  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:16:43.816567  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:16:43.832435  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:16:43.848147  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:16:43.864438  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:16:43.880361  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:16:43.896362  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:16:43.912480  270029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:16:43.924191  270029 ssh_runner.go:195] Run: openssl version
	I0601 11:16:43.928562  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:16:43.935311  270029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:16:43.938057  270029 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:16:43.938091  270029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:16:43.942508  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:16:43.948891  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:16:43.955605  270029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:16:43.958385  270029 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:16:43.958427  270029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:16:43.962842  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:16:43.969066  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:16:43.975850  270029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:16:43.978786  270029 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:16:43.978822  270029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:16:43.983269  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
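The openssl x509 -hash plus ln -fs pairs above install each PEM under /etc/ssl/certs by subject hash. A hedged Go sketch of the same two steps, shelling out to openssl for the hash and creating the link (requires openssl on PATH and write access to /etc/ssl/certs; illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert mimics the logged steps: ask openssl for the certificate's
// subject hash, then link /etc/ssl/certs/<hash>.0 at the PEM.
func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace any stale link, as ln -fs would
	return os.Symlink(pem, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}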
	I0601 11:16:43.989455  270029 kubeadm.go:395] StartCluster: {Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:16:43.989553  270029 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:16:43.989584  270029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:16:44.014119  270029 cri.go:87] found id: "f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	I0601 11:16:44.014147  270029 cri.go:87] found id: "4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6"
	I0601 11:16:44.014155  270029 cri.go:87] found id: "d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a"
	I0601 11:16:44.014160  270029 cri.go:87] found id: "c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0"
	I0601 11:16:44.014169  270029 cri.go:87] found id: "a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f"
	I0601 11:16:44.014178  270029 cri.go:87] found id: "b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2"
	I0601 11:16:44.014195  270029 cri.go:87] found id: ""
	I0601 11:16:44.014231  270029 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:16:44.026017  270029 cri.go:114] JSON = null
	W0601 11:16:44.026068  270029 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
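The cri.go lines above enumerate kube-system containers with a crictl label filter before deciding whether to unpause. A small sketch of the same query (assumes crictl and sudo are available; not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers reruns the crictl query from the log: all container
// IDs labelled with the kube-system namespace, one ID per line.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(len(ids), "kube-system containers found:", ids)
}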
	I0601 11:16:44.026121  270029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:16:44.032599  270029 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:16:44.032619  270029 kubeadm.go:626] restartCluster start
	I0601 11:16:44.032657  270029 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:16:44.038572  270029 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.039184  270029 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220601110327-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:16:44.039521  270029 kubeconfig.go:127] "embed-certs-20220601110327-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:16:44.040098  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
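kubeconfig.go above verifies that the profile's context exists before repairing the file. The same presence check can be written with client-go's clientcmd loader; a sketch under the assumption that KUBECONFIG points at the file and k8s.io/client-go is in go.mod (this is not minikube's code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// Re-creates the context-presence check behind kubeconfig.go:116/127.
func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	name := "embed-certs-20220601110327-6708"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q is missing - a repair would re-add it\n", name)
	}
}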
	I0601 11:16:44.041394  270029 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:16:44.047555  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.047587  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.054922  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
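api_server.go probes for a running apiserver with pgrep, and the checks below repeat roughly every 200ms until minikube's deadline expires. A sketch of that poll (timings illustrative; pattern taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID is the pgrep probe from api_server.go:165 above.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return string(out), err
}

func main() {
	for i := 0; i < 5; i++ { // minikube polls far longer; five tries for the demo
		if pid, err := apiserverPID(); err == nil {
			fmt.Print("apiserver pid: ", pid)
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("no kube-apiserver process found")
}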
	I0601 11:16:44.255283  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.255367  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.263875  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.455148  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.455218  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.463550  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.655853  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.655952  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.664417  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.855542  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.855598  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.863960  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.055126  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.055211  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.063480  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.255826  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.255924  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.264353  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.455654  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.455728  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.464072  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.655400  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.655474  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.664018  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.855135  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.855220  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.863919  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.055139  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.055234  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.063984  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.255234  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.255309  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.263465  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.455752  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.455834  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.464271  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.655553  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.655615  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.664388  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.855579  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.855653  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.864130  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.354907  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:47.355242  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:47.055676  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:47.055754  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:47.064444  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.064468  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:47.064499  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:47.072080  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.072108  270029 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 11:16:47.072115  270029 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:16:47.072127  270029 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:16:47.072169  270029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:16:47.097085  270029 cri.go:87] found id: "f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	I0601 11:16:47.097117  270029 cri.go:87] found id: "4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6"
	I0601 11:16:47.097128  270029 cri.go:87] found id: "d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a"
	I0601 11:16:47.097138  270029 cri.go:87] found id: "c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0"
	I0601 11:16:47.097146  270029 cri.go:87] found id: "a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f"
	I0601 11:16:47.097156  270029 cri.go:87] found id: "b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2"
	I0601 11:16:47.097162  270029 cri.go:87] found id: ""
	I0601 11:16:47.097167  270029 cri.go:232] Stopping containers: [f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187 4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6 d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0 a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2]
	I0601 11:16:47.097217  270029 ssh_runner.go:195] Run: which crictl
	I0601 11:16:47.099999  270029 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187 4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6 d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0 a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2
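
The two crictl steps above (list container IDs by the kube-system namespace label, then stop them all in one call) can be reproduced directly; a sketch using os/exec with the same flags as the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List all kube-system containers by CRI label, as in cri.go:52 above.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl ps failed:", err)
            return
        }
        ids := strings.Fields(string(out)) // one container ID per line
        if len(ids) == 0 {
            return // nothing to stop
        }
        // Stop them all in a single invocation, matching the log line above.
        args := append([]string{"crictl", "stop"}, ids...)
        if err := exec.Command("sudo", args...).Run(); err != nil {
            fmt.Println("crictl stop failed:", err)
        }
    }
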
	I0601 11:16:47.124540  270029 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:16:47.134618  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:16:47.141742  270029 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  1 11:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:03 /etc/kubernetes/scheduler.conf
	
	I0601 11:16:47.141795  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:16:47.148369  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:16:47.154571  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:16:47.160776  270029 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.160822  270029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:16:47.166675  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:16:47.172938  270029 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.172978  270029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
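
The grep/rm sequence above keeps each kubeconfig only if it still references https://control-plane.minikube.internal:8443 and deletes the stale ones so the following `kubeadm init phase kubeconfig` can regenerate them. A sketch of the same check done in-process (file list and endpoint taken from the log; minikube itself shells out to grep):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs removes any config that no longer points at the
    // expected control-plane endpoint, leaving kubeadm to recreate it.
    func cleanStaleKubeconfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                continue // missing file: nothing to clean up
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s does not reference %s - removing\n", p, endpoint)
                os.Remove(p)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443",
            []string{
                "/etc/kubernetes/admin.conf",
                "/etc/kubernetes/kubelet.conf",
                "/etc/kubernetes/controller-manager.conf",
                "/etc/kubernetes/scheduler.conf",
            })
    }
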
	I0601 11:16:47.179087  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:16:47.185727  270029 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:16:47.185749  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:47.228261  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:48.197494  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:48.329624  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:48.378681  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
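
Instead of a full `kubeadm init`, the reconfigure path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the saved /var/tmp/minikube/kubeadm.yaml, in the exact order logged above. A sketch of that sequencing:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{ // same order as the five Run: lines above
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, ph := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, ph...)
            args = append(args, "--config", cfg)
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", ph, err, out)
                return
            }
        }
    }
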
	I0601 11:16:48.420684  270029 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:16:48.420732  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:48.929035  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:49.428979  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:49.928976  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:50.428888  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:50.928698  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:51.428664  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:51.929701  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:49.355631  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:51.854986  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:52.429050  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:52.928894  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:53.429111  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:53.929528  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:54.429038  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:54.463645  270029 api_server.go:71] duration metric: took 6.042967785s to wait for apiserver process to appear ...
	I0601 11:16:54.463674  270029 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:16:54.463686  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:54.464059  270029 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0601 11:16:54.964315  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:53.855517  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:56.355928  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:57.340901  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:16:57.340932  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:16:57.464200  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:57.470124  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:16:57.470161  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:16:57.964628  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:57.969079  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:16:57.969109  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:16:58.464413  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:58.469280  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:16:58.469323  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:16:58.964873  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:58.969629  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0601 11:16:58.976323  270029 api_server.go:140] control plane version: v1.23.6
	I0601 11:16:58.976349  270029 api_server.go:130] duration metric: took 4.512668885s to wait for apiserver health ...
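
In the 500 bodies above, each `[-]` line is a post-start hook that has not finished (rbac/bootstrap-roles, system priority classes, APIService registration); across successive polls they flip to `[+]` until the endpoint returns a bare "ok". A tiny parser for that verbose /healthz format, for anyone triaging similar logs:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // failingChecks extracts the names of checks reported as "[-]<name> failed"
    // from a verbose /healthz body like the ones logged above.
    func failingChecks(body string) []string {
        var failed []string
        sc := bufio.NewScanner(strings.NewReader(body))
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "[-]") {
                name := strings.TrimPrefix(line, "[-]")
                if i := strings.Index(name, " "); i > 0 {
                    name = name[:i]
                }
                failed = append(failed, name)
            }
        }
        return failed
    }

    func main() {
        body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed\n"
        fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles]
    }
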
	I0601 11:16:58.976362  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:16:58.976370  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:16:58.978490  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:16:58.979893  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:16:58.983633  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:16:58.983655  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:16:58.996686  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
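
With the docker driver and containerd runtime, minikube selects kindnet and applies its CNI manifest by writing the bytes to /var/tmp/minikube/cni.yaml ("scp memory" above) and running the in-VM kubectl against /var/lib/minikube/kubeconfig. A local sketch of that write-then-apply step (the manifest contents are a placeholder):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifest := []byte("# kindnet DaemonSet manifest would go here\n")
        const path = "/var/tmp/minikube/cni.yaml"
        // Equivalent of the "scp memory --> /var/tmp/minikube/cni.yaml" step.
        if err := os.WriteFile(path, manifest, 0o644); err != nil {
            fmt.Println("write:", err)
            return
        }
        // Then the same kubectl apply as logged above.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.6/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("kubectl apply failed: %v\n%s", err, out)
        }
    }
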
	I0601 11:16:59.594447  270029 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:16:59.601657  270029 system_pods.go:59] 9 kube-system pods found
	I0601 11:16:59.601692  270029 system_pods.go:61] "coredns-64897985d-9dpfv" [2fd986d2-2806-41d0-b75f-04a9f5883420] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:16:59.601699  270029 system_pods.go:61] "etcd-embed-certs-20220601110327-6708" [696f91cd-2833-44cc-80cb-7cff571b5b35] Running
	I0601 11:16:59.601709  270029 system_pods.go:61] "kindnet-92tfl" [1e2e52a8-4f89-49af-9741-f79384628a29] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:16:59.601719  270029 system_pods.go:61] "kube-apiserver-embed-certs-20220601110327-6708" [a1b6d250-97ce-4261-983a-a43004795368] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:16:59.601741  270029 system_pods.go:61] "kube-controller-manager-embed-certs-20220601110327-6708" [2f9b6898-a046-4ff4-8a25-f38e0bfc8ebd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:16:59.601766  270029 system_pods.go:61] "kube-proxy-99lsz" [c2f232c6-4807-4bcf-a1ca-c39489a0112a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:16:59.601778  270029 system_pods.go:61] "kube-scheduler-embed-certs-20220601110327-6708" [846abe25-58d2-4c73-8fb2-bd8f7d4cd289] Running
	I0601 11:16:59.601786  270029 system_pods.go:61] "metrics-server-b955d9d8-c4kht" [b1221545-5b1f-4fd0-9d91-732fae262566] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:16:59.601813  270029 system_pods.go:61] "storage-provisioner" [8d62c4a6-0f6f-4855-adc3-3347614c0287] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:16:59.601825  270029 system_pods.go:74] duration metric: took 7.351583ms to wait for pod list to return data ...
	I0601 11:16:59.601839  270029 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:16:59.604272  270029 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:16:59.604298  270029 node_conditions.go:123] node cpu capacity is 8
	I0601 11:16:59.604311  270029 node_conditions.go:105] duration metric: took 2.462157ms to run NodePressure ...
	I0601 11:16:59.604330  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:59.726966  270029 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:16:59.731041  270029 kubeadm.go:777] kubelet initialised
	I0601 11:16:59.731062  270029 kubeadm.go:778] duration metric: took 4.07535ms waiting for restarted kubelet to initialise ...
	I0601 11:16:59.731070  270029 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:16:59.737745  270029 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
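
The wait announced above polls each system-critical pod for a PodReady=True condition; the status dumps that follow show why it never arrives here (the node keeps its not-ready taint, so coredns stays Pending/Unschedulable). A client-go sketch of that kind of wait, assuming the k8s.io/client-go dependency and the in-VM kubeconfig path from this log (minikube's own implementation differs in detail):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-64897985d-9dpfv", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
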
	I0601 11:17:01.743720  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:58.855967  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:01.355221  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:03.356805  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:04.243101  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:06.743027  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:05.855005  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:07.855118  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:09.243031  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:11.742744  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:09.855246  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:11.855357  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:13.743254  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:16.242930  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:14.355967  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:16.855640  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:19.355251  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:20.353024  254820 pod_ready.go:81] duration metric: took 4m0.003317239s waiting for pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace to be "Ready" ...
	E0601 11:17:20.353048  254820 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:17:20.353067  254820 pod_ready.go:38] duration metric: took 4m0.008046261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:17:20.353090  254820 kubeadm.go:630] restartCluster took 5m9.726790355s
	W0601 11:17:20.353201  254820 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 11:17:20.353229  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:17:21.540348  254820 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.18709622s)
	I0601 11:17:21.540403  254820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:17:21.550073  254820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:17:21.557225  254820 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:17:21.557279  254820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:17:21.564483  254820 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:17:21.564542  254820 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
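
After the reset, the node is re-initialised with a long --ignore-preflight-errors list because it is a reused container rather than a fresh host: the directories and static-pod manifests already exist (minikube manages them), and checks such as Port-10250, Swap and SystemVerification would fail inside Docker. A sketch of the reset-then-init pair, with the skip list abbreviated to a few entries from the full flag above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        sock := "/run/containerd/containerd.sock"
        // Tear down the old control plane first, as in the log above.
        reset := exec.Command("sudo", "kubeadm", "reset", "--cri-socket", sock, "--force")
        if out, err := reset.CombinedOutput(); err != nil {
            fmt.Printf("reset failed: %v\n%s", err, out)
            return
        }
        // Re-init, skipping preflight checks a reused node would fail
        // (abbreviated; the log line above shows the complete list).
        skip := []string{"DirAvailable--etc-kubernetes-manifests",
            "Port-10250", "Swap", "SystemVerification"}
        init := exec.Command("sudo", "kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors="+strings.Join(skip, ","))
        if out, err := init.CombinedOutput(); err != nil {
            fmt.Printf("init failed: %v\n%s", err, out)
        }
    }
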
	I0601 11:17:18.243905  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:20.743239  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:21.918912  254820 out.go:204]   - Generating certificates and keys ...
	I0601 11:17:22.738748  254820 out.go:204]   - Booting up control plane ...
	I0601 11:17:23.242797  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:25.244019  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:27.743633  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:30.243041  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:31.781643  254820 out.go:204]   - Configuring RBAC rules ...
	I0601 11:17:32.197904  254820 cni.go:95] Creating CNI manager for ""
	I0601 11:17:32.197928  254820 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:17:32.199768  254820 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:17:32.201271  254820 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:17:32.204901  254820 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0601 11:17:32.204927  254820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:17:32.218979  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:17:32.428375  254820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:17:32.428491  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:32.428492  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=old-k8s-version-20220601105850-6708 minikube.k8s.io/updated_at=2022_06_01T11_17_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:32.435199  254820 ops.go:34] apiserver oom_adj: -16
	I0601 11:17:32.502431  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:33.111963  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:33.611972  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:32.243074  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:34.742805  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:36.743017  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:34.111941  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:34.612095  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:35.112547  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:35.612182  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:36.111959  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:36.612404  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:37.111711  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:37.612424  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:38.112268  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:38.612434  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:38.743858  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:41.242806  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:39.112426  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:39.612518  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:40.111917  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:40.612600  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:41.112173  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:41.611972  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:42.112578  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:42.611825  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:43.111943  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:43.611988  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:43.243005  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:45.742725  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:44.111909  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:44.612461  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:45.111634  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:45.611836  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:46.111993  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:46.612464  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:47.112043  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:47.179274  254820 kubeadm.go:1045] duration metric: took 14.750830236s to wait for elevateKubeSystemPrivileges.
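
The long run of `kubectl get sa default` calls above is a readiness gate: the default ServiceAccount only appears once the controller-manager's service-account controller is running, so minikube retries on a 500ms cadence and books the total (14.75s here) under elevateKubeSystemPrivileges, the step that also created the minikube-rbac clusterrolebinding earlier. A sketch of that wait (the causal reading of the step name is an inference from this log, not minikube documentation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.16.0/kubectl"
        start := time.Now()
        for time.Since(start) < 2*time.Minute {
            // Exits non-zero until the default ServiceAccount exists.
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Printf("default SA ready after %s\n", time.Since(start))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
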
	I0601 11:17:47.179303  254820 kubeadm.go:397] StartCluster complete in 5m36.59748449s
	I0601 11:17:47.179319  254820 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:17:47.179406  254820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:17:47.180983  254820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:17:47.695922  254820 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220601105850-6708" rescaled to 1
	I0601 11:17:47.695995  254820 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:17:47.699321  254820 out.go:177] * Verifying Kubernetes components...
	I0601 11:17:47.696036  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:17:47.696052  254820 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:17:47.696246  254820 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:17:47.700668  254820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:17:47.700702  254820 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700714  254820 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700720  254820 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700726  254820 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700729  254820 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700730  254820 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220601105850-6708"
	W0601 11:17:47.700732  254820 addons.go:165] addon storage-provisioner should already be in state true
	W0601 11:17:47.700739  254820 addons.go:165] addon dashboard should already be in state true
	W0601 11:17:47.700738  254820 addons.go:165] addon metrics-server should already be in state true
	I0601 11:17:47.700777  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.700784  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.700790  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.700709  254820 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700824  254820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.701173  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.701259  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.701279  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.701285  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.713330  254820 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:17:47.750554  254820 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220601105850-6708"
	W0601 11:17:47.750578  254820 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:17:47.750614  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.754763  254820 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:17:47.750975  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.758252  254820 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:17:47.757064  254820 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:17:47.758285  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:17:47.758359  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.759985  254820 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:17:47.761242  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:17:47.761281  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:17:47.761332  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.764485  254820 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:17:47.765882  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:17:47.765902  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:17:47.765947  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.809745  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:17:47.813277  254820 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:17:47.813299  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:17:47.813350  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.814249  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.830337  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.832848  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.854014  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.968702  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:17:47.969398  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:17:47.969416  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:17:47.969640  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:17:47.970189  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:17:47.970208  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:17:48.055776  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:17:48.055807  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:17:48.057898  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:17:48.057918  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:17:48.072675  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:17:48.072702  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:17:48.072788  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:17:48.072805  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:17:48.154620  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:17:48.154657  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:17:48.158011  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:17:48.181023  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:17:48.181054  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:17:48.271474  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:17:48.271501  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:17:48.275017  254820 start.go:806] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
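The ssh_runner call at 11:17:47.809745 rewrites the kube-system/coredns ConfigMap in place, and the line above ("host record injected into CoreDNS") confirms the replace succeeded. The fragment below is lifted straight from that sed expression; the kubectl call is a hypothetical way to verify it, assuming a kubeconfig context pointed at this profile:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected fragment, per the sed expression above, immediately before the
	# "forward . /etc/resolv.conf" line:
	#     hosts {
	#        192.168.58.1 host.minikube.internal
	#        fallthrough
	#     }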
	I0601 11:17:48.360863  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:17:48.360891  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:17:48.376549  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:17:48.376580  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:17:48.392167  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:17:48.392196  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:17:48.464169  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:17:48.882430  254820 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20220601105850-6708"
	I0601 11:17:49.264813  254820 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:17:47.745681  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:50.242914  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:49.266465  254820 addons.go:417] enableAddons completed in 1.57041232s
	I0601 11:17:49.718664  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:17:51.719973  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:17:52.742723  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:54.742788  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:56.742912  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:54.219149  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:17:56.719562  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:17:58.743796  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:01.242721  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:59.218935  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:01.719652  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:03.243620  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:05.742777  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:04.219204  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:06.719494  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:07.742900  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:09.743030  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:09.219176  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:11.718880  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:13.719312  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:12.242806  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:14.243041  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:16.742612  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:15.719670  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:18.219172  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:18.742966  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:21.243088  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:20.219521  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:22.719227  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:23.245196  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:25.742790  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:24.719365  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:26.719411  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:27.743212  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:30.243627  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:29.218801  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:31.219603  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:33.719821  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:32.743319  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:35.242980  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:36.219334  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:38.219629  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:37.243134  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:39.742862  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:40.219897  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:42.719206  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:42.242887  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:44.243121  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:46.742692  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:44.719361  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:46.719965  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:48.742793  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:51.243730  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:49.219161  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:51.719823  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:53.742610  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:55.742817  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:54.219442  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:56.719307  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:57.742887  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:00.244895  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:59.218862  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:01.219115  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:03.219470  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:02.743210  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:05.242775  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:05.719920  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:08.219261  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:07.243536  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:09.743457  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:11.743691  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:10.719799  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:13.219313  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:13.743775  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:16.242793  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:15.220539  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:17.719072  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:18.243014  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:20.742913  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:19.719157  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:22.219444  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:22.743021  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:24.743212  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:24.718931  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:26.719409  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:28.719822  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:27.243432  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:29.743172  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:31.219776  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:33.719660  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:32.242892  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:34.242952  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:36.742808  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
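Both pollers above are stuck on the same symptom: node_ready (PID 254820) never sees the old-k8s-version node report Ready, and pod_ready (PID 270029) keeps finding coredns-64897985d-9dpfv Unschedulable because its single node still carries the node.kubernetes.io/not-ready taint, matching the NotReady node described further down. A hypothetical manual check of that taint and condition (substitute the kubeconfig context of the affected profile):

	kubectl get nodes -o jsonpath='{.items[0].spec.taints}'
	kubectl get nodes -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].message}'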
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fec5fcb4aee4a       6de166512aa22       23 seconds ago      Exited              kindnet-cni               7                   65b8c60551ae4
	313035e9674ff       4c03754524064       12 minutes ago      Running             kube-proxy                0                   c6ff76a6b51bf
	f9746f111b56a       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   9e938dc1f669a
	0b15aeee4f551       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   1fa00271568ab
	627fd5c08820c       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   a871ea5dc3032
	6ce85ae821e03       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   73e15160f8342
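The status table shows every control-plane container Running for 12 minutes while kindnet-cni sits at attempt 7 and has already Exited, i.e. the CNI daemonset container is crash-looping. A hypothetical way to pull its logs from inside the node (assumes crictl is available in the node image, as it normally is for minikube's containerd runtime):

	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220601110654-6708 \
	  "sudo crictl ps -a --name kindnet-cni && sudo crictl logs fec5fcb4aee4a"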
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:07:03 UTC, end at Wed 2022-06-01 11:19:38 UTC. --
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.984096913Z" level=warning msg="cleaning up after shim disconnected" id=783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2 namespace=k8s.io
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.984110739Z" level=info msg="cleaning up dead shim"
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.993468203Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:11:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2449 runtime=io.containerd.runc.v2\n"
	Jun 01 11:11:22 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:22.095149153Z" level=info msg="RemoveContainer for \"8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9\""
	Jun 01 11:11:22 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:22.099371016Z" level=info msg="RemoveContainer for \"8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9\" returns successfully"
	Jun 01 11:14:02 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:02.586772350Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Jun 01 11:14:02 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:02.598817303Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\""
	Jun 01 11:14:02 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:02.599241470Z" level=info msg="StartContainer for \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\""
	Jun 01 11:14:02 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:02.668660041Z" level=info msg="StartContainer for \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\" returns successfully"
	Jun 01 11:14:12 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:12.893305963Z" level=info msg="shim disconnected" id=937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76
	Jun 01 11:14:12 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:12.893373288Z" level=warning msg="cleaning up after shim disconnected" id=937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76 namespace=k8s.io
	Jun 01 11:14:12 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:12.893386248Z" level=info msg="cleaning up dead shim"
	Jun 01 11:14:12 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:12.902625641Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:14:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2779 runtime=io.containerd.runc.v2\n"
	Jun 01 11:14:13 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:13.400464046Z" level=info msg="RemoveContainer for \"783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2\""
	Jun 01 11:14:13 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:13.405029474Z" level=info msg="RemoveContainer for \"783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2\" returns successfully"
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:15.586562484Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:7,}"
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:15.599083872Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for &ContainerMetadata{Name:kindnet-cni,Attempt:7,} returns container id \"fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd\""
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:15.599621933Z" level=info msg="StartContainer for \"fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd\""
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:15.757658411Z" level=info msg="StartContainer for \"fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd\" returns successfully"
	Jun 01 11:19:25 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:25.983536573Z" level=info msg="shim disconnected" id=fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd
	Jun 01 11:19:25 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:25.983603412Z" level=warning msg="cleaning up after shim disconnected" id=fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd namespace=k8s.io
	Jun 01 11:19:25 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:25.983617053Z" level=info msg="cleaning up dead shim"
	Jun 01 11:19:25 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:25.992675415Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:19:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2882 runtime=io.containerd.runc.v2\n"
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:26.564844435Z" level=info msg="RemoveContainer for \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\""
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:26.568945671Z" level=info msg="RemoveContainer for \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\" returns successfully"
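The containerd journal confirms the loop: attempt 6 starts at 11:14:02 and its shim disconnects at 11:14:12, attempt 7 starts at 11:19:15 and dies at 11:19:25, so each kindnet-cni container runs for roughly ten seconds and exits. Creation and start both return successfully, which rules out image-pull or sandbox problems. A hypothetical one-liner to list every cycle at once:

	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220601110654-6708 \
	  "sudo journalctl -u containerd --no-pager | grep kindnet-cni"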
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220601110654-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220601110654-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_07_22_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:07:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220601110654-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:19:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:17:49 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:17:49 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:17:49 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:17:49 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220601110654-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                c3073178-0849-48bb-88da-ba72ab8c4ba0
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220601110654-6708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-7fspq                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220601110654-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220601110654-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-slzcl                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220601110654-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 12m   kube-proxy  
	  Normal  Starting                 12m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet     Updated Node Allocatable limit across pods
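The node description closes the loop: Ready is False with "cni plugin not initialized", so the node keeps its node.kubernetes.io/not-ready:NoSchedule taint, which is exactly the taint blocking the coredns pod earlier in the log. A hypothetical spot-check for whether the crash-looping kindnet pod ever managed to write a CNI config (/etc/cni/net.d is the conventional location; this check is an editorial sketch, not part of the test run):

	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220601110654-6708 \
	  "ls -la /etc/cni/net.d/ 2>/dev/null || echo 'no CNI config present'"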
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44] <==
	* {"level":"info","ts":"2022-06-01T11:07:16.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220601110654-6708 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:07:16.177Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:07:16.178Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:07:16.178Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:07:16.177Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-06-01T11:14:25.280Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"275.701314ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128013397418876628 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20220601110654-6708\" mod_revision:621 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20220601110654-6708\" value_size:588 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20220601110654-6708\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T11:14:25.280Z","caller":"traceutil/trace.go:171","msg":"trace[249815246] linearizableReadLoop","detail":"{readStateIndex:724; appliedIndex:723; }","duration":"194.762209ms","start":"2022-06-01T11:14:25.085Z","end":"2022-06-01T11:14:25.280Z","steps":["trace[249815246] 'read index received'  (duration: 13.948823ms)","trace[249815246] 'applied index is now lower than readState.Index'  (duration: 180.811748ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T11:14:25.280Z","caller":"traceutil/trace.go:171","msg":"trace[1228048321] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"289.883037ms","start":"2022-06-01T11:14:24.990Z","end":"2022-06-01T11:14:25.280Z","steps":["trace[1228048321] 'process raft request'  (duration: 13.589499ms)","trace[1228048321] 'compare'  (duration: 275.590014ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:14:25.280Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"194.895257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:14:25.280Z","caller":"traceutil/trace.go:171","msg":"trace[64683920] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:623; }","duration":"194.930063ms","start":"2022-06-01T11:14:25.085Z","end":"2022-06-01T11:14:25.280Z","steps":["trace[64683920] 'agreement among raft nodes before linearized reading'  (duration: 194.877912ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T11:17:16.484Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":585}
	{"level":"info","ts":"2022-06-01T11:17:16.485Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":585,"took":"542.037µs"}
	
	* 
	* ==> kernel <==
	*  11:19:39 up  1:02,  0 users,  load average: 0.38, 1.98, 2.07
	Linux default-k8s-different-port-20220601110654-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90] <==
	* I0601 11:07:18.453282       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:07:18.453289       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:07:18.453299       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:07:18.453427       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:07:18.454162       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:07:18.464230       1 controller.go:611] quota admission added evaluator for: namespaces
	I0601 11:07:19.313010       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:07:19.313033       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:07:19.318632       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 11:07:19.321753       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 11:07:19.321788       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:07:19.672421       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:07:19.701304       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:07:19.786756       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:07:19.792151       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 11:07:19.793209       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:07:19.796644       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:07:20.164772       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:07:20.480504       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:07:21.468664       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:07:21.475420       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:07:21.484951       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:07:33.885430       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:07:34.285929       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:07:34.903429       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787] <==
	* I0601 11:07:33.334022       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 11:07:33.334139       1 event.go:294] "Event occurred" object="default-k8s-different-port-20220601110654-6708" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node default-k8s-different-port-20220601110654-6708 event: Registered Node default-k8s-different-port-20220601110654-6708 in Controller"
	I0601 11:07:33.340437       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0601 11:07:33.340465       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0601 11:07:33.342708       1 shared_informer.go:247] Caches are synced for namespace 
	I0601 11:07:33.370084       1 shared_informer.go:247] Caches are synced for service account 
	I0601 11:07:33.410234       1 shared_informer.go:247] Caches are synced for expand 
	I0601 11:07:33.416497       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:07:33.464111       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:07:33.474301       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0601 11:07:33.482924       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:07:33.484099       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:07:33.522810       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:07:33.526980       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:33.535611       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:33.887240       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:07:33.937891       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:33.937920       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:07:33.958070       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:34.291990       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7fspq"
	I0601 11:07:34.293024       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-slzcl"
	I0601 11:07:34.337886       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-zbtdx"
	I0601 11:07:34.342039       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9gcj2"
	I0601 11:07:34.693996       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:07:34.702363       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-zbtdx"
	
	* 
	* ==> kube-proxy [313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d] <==
	* I0601 11:07:34.878114       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:07:34.878163       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:07:34.878197       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:07:34.900526       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:07:34.900564       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:07:34.900573       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:07:34.900595       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:07:34.900961       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:07:34.901514       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:07:34.901535       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:07:34.901567       1 config.go:317] "Starting service config controller"
	I0601 11:07:34.901573       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:07:35.002527       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:07:35.002535       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e] <==
	* W0601 11:07:18.472752       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:07:18.472806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:07:18.472922       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:07:18.473037       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.473083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.473043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:07:18.472942       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:07:18.473159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:07:18.473644       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.473712       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:18.473647       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:07:18.473764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:07:18.475610       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.475814       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:19.293620       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:07:19.293655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:07:19.295513       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:19.295539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:19.320706       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:07:19.320741       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:07:19.376036       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:07:19.376074       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:07:19.399236       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:07:19.399272       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:07:22.265287       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
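The burst of "forbidden" list/watch errors above is the scheduler racing the apiserver's RBAC bootstrap at startup; once the bootstrap roles are reconciled the informers recover, and the final "Caches are synced" line shows that happened within seconds. To confirm the grants after the fact (standard kubectl impersonation; the admin kubeconfig has the impersonation rights this needs):

    kubectl --context default-k8s-different-port-20220601110654-6708 \
      auth can-i list pods --as=system:kube-scheduler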
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:07:03 UTC, end at Wed 2022-06-01 11:19:39 UTC. --
	Jun 01 11:18:35 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:18:35.584173    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:18:35 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:35.584481    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:18:36 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:36.962867    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:18:41 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:41.964090    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:18:46 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:46.965181    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:18:50 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:18:50.585106    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:18:50 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:50.585387    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:18:51 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:51.966429    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:18:56 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:56.967708    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:01 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:01.968887    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:04 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:04.584531    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:19:04 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:04.584937    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:19:06 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:06.969644    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:11 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:11.971217    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:15.584386    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:19:16 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:16.972574    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:21 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:21.973658    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:26.563628    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:26.929824    1317 scope.go:110] "RemoveContainer" containerID="fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:26.930069    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:26.974424    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:31 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:31.975965    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:36 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:36.977187    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:37 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:37.584389    1317 scope.go:110] "RemoveContainer" containerID="fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	Jun 01 11:19:37 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:37.584647    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	

                                                
                                                
-- /stdout --
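The kubelet section of the log dump above carries the root cause for this post-mortem: kindnet-cni is stuck in CrashLoopBackOff, so the CNI never initializes ("Network plugin returns error: cni plugin not initialized"), the node never turns Ready, and workloads cannot schedule. A minimal triage sketch (standard kubectl; the app=kindnet label selector is an assumption about how the kindnet DaemonSet labels its pods):

    # is kindnet crash-looping, and why did the last attempt die?
    kubectl --context default-k8s-different-port-20220601110654-6708 -n kube-system get pods -l app=kindnet
    kubectl --context default-k8s-different-port-20220601110654-6708 -n kube-system logs -l app=kindnet --previous
    # the node should report NotReady while the CNI is down
    kubectl --context default-k8s-different-port-20220601110654-6708 get nodes -o wide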
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-9gcj2 storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 describe pod busybox coredns-64897985d-9gcj2 storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod busybox coredns-64897985d-9gcj2 storage-provisioner: exit status 1 (59.863376ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9mjz (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-c9mjz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  47s (x8 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-9gcj2" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod busybox coredns-64897985d-9gcj2 storage-provisioner: exit status 1
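This describe output ties the DeployApp failure back to the CNI problem: busybox stays Pending because the single node still carries the node.kubernetes.io/not-ready taint, and the pod only has the default NoExecute tolerations (300s each), which do not satisfy the scheduling-time not-ready taint the scheduler is rejecting on. A quick confirmation (standard kubectl; the jsonpath indexes the only node):

    kubectl --context default-k8s-different-port-20220601110654-6708 \
      get nodes -o jsonpath='{.items[0].spec.taints}'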
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601110654-6708
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220601110654-6708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b",
	        "Created": "2022-06-01T11:07:03.290503902Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:07:03.630929291Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hosts",
	        "LogPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b-json.log",
	        "Name": "/default-k8s-different-port-20220601110654-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220601110654-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220601110654-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220601110654-6708",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220601110654-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220601110654-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7855192596bd9f60fe4ad2cd96f599cd40d7bd62bfad35d8e1f5a897e3270f06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7855192596bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220601110654-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dccf9935a74c",
	                        "default-k8s-different-port-20220601110654-6708"
	                    ],
	                    "NetworkID": "7d52ef0dc0855b59c05da2e66b25f4d0866ad1d653be1fa615e193dd86443771",
	                    "EndpointID": "333c0952bde2fd448463a8d5d563d8e8c8448f605be2cf7fffa411011fe20066",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
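Two details in the inspect output are worth calling out. First, the profile exposes the apiserver on 8444/tcp instead of the usual 8443 (that is the "different-port" in the test name), published to a dynamically chosen localhost port, 49414 in this run. Second, the container itself is healthy ("Status": "running", RestartCount 0), so the failure sits inside the cluster, not at the Docker layer. The mapping can be read back directly (standard docker CLI):

    # prints 127.0.0.1:49414 for this run; the host port is picked at container create time
    docker port default-k8s-different-port-20220601110654-6708 8444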
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220601110654-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | calico-20220601104839-6708                                 | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220601104839-6708                              | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:16:27
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:16:27.030025  270029 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:16:27.030200  270029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:16:27.030210  270029 out.go:309] Setting ErrFile to fd 2...
	I0601 11:16:27.030214  270029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:16:27.030316  270029 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:16:27.030590  270029 out.go:303] Setting JSON to false
	I0601 11:16:27.032104  270029 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3541,"bootTime":1654078646,"procs":726,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:16:27.032160  270029 start.go:125] virtualization: kvm guest
	I0601 11:16:27.034601  270029 out.go:177] * [embed-certs-20220601110327-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:16:27.036027  270029 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:16:27.035970  270029 notify.go:193] Checking for updates...
	I0601 11:16:27.037352  270029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:16:27.038882  270029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:16:27.040231  270029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:16:27.041542  270029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:16:27.043240  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:16:27.043659  270029 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:16:27.081227  270029 docker.go:137] docker version: linux-20.10.16
	I0601 11:16:27.081310  270029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:16:27.182938  270029 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:16:27.109556043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:16:27.183042  270029 docker.go:254] overlay module found
	I0601 11:16:27.185912  270029 out.go:177] * Using the docker driver based on existing profile
	I0601 11:16:27.187159  270029 start.go:284] selected driver: docker
	I0601 11:16:27.187172  270029 start.go:806] validating driver "docker" against &{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:16:27.187275  270029 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:16:27.188164  270029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:16:27.287572  270029 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:16:27.216523745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:16:27.287846  270029 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:16:27.287899  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:16:27.287909  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:16:27.287923  270029 start_flags.go:306] config:
	{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:16:27.290349  270029 out.go:177] * Starting control plane node embed-certs-20220601110327-6708 in cluster embed-certs-20220601110327-6708
	I0601 11:16:27.291691  270029 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:16:27.292997  270029 out.go:177] * Pulling base image ...
	I0601 11:16:27.294363  270029 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:16:27.294386  270029 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:16:27.294393  270029 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:16:27.294434  270029 cache.go:57] Caching tarball of preloaded images
	I0601 11:16:27.295098  270029 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:16:27.295161  270029 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:16:27.295359  270029 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:16:27.338028  270029 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:16:27.338057  270029 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:16:27.338077  270029 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:16:27.338121  270029 start.go:352] acquiring machines lock for embed-certs-20220601110327-6708: {Name:mk2bc8f54b3ac1967b6e5e724f1be8808370dc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:16:27.338232  270029 start.go:356] acquired machines lock for "embed-certs-20220601110327-6708" in 83.619µs
	I0601 11:16:27.338252  270029 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:16:27.338262  270029 fix.go:55] fixHost starting: 
	I0601 11:16:27.338520  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:16:27.369415  270029 fix.go:103] recreateIfNeeded on embed-certs-20220601110327-6708: state=Stopped err=<nil>
	W0601 11:16:27.369444  270029 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:16:27.371758  270029 out.go:177] * Restarting existing docker container for "embed-certs-20220601110327-6708" ...
	I0601 11:16:24.354878  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:26.855526  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:27.373224  270029 cli_runner.go:164] Run: docker start embed-certs-20220601110327-6708
	I0601 11:16:27.750544  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:16:27.784421  270029 kic.go:416] container "embed-certs-20220601110327-6708" state is running.
	I0601 11:16:27.784842  270029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:16:27.816168  270029 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:16:27.816441  270029 machine.go:88] provisioning docker machine ...
	I0601 11:16:27.816482  270029 ubuntu.go:169] provisioning hostname "embed-certs-20220601110327-6708"
	I0601 11:16:27.816529  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:27.849760  270029 main.go:134] libmachine: Using SSH client type: native
	I0601 11:16:27.849917  270029 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0601 11:16:27.849935  270029 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220601110327-6708 && echo "embed-certs-20220601110327-6708" | sudo tee /etc/hostname
	I0601 11:16:27.850521  270029 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54016->127.0.0.1:49437: read: connection reset by peer
	I0601 11:16:30.976432  270029 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220601110327-6708
	
	I0601 11:16:30.976514  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.008861  270029 main.go:134] libmachine: Using SSH client type: native
	I0601 11:16:31.009014  270029 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0601 11:16:31.009044  270029 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220601110327-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220601110327-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220601110327-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
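
	The shell block above pins the node hostname via 127.0.1.1 only when no matching /etc/hosts entry exists yet, so repeated starts stay idempotent. A minimal sketch for confirming the result from the host (the profile name is taken from this run):

		# check the hostname pin inside the minikube node
		out/minikube-linux-amd64 ssh -p embed-certs-20220601110327-6708 "hostname && grep 127.0.1.1 /etc/hosts"
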
	I0601 11:16:31.123496  270029 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:16:31.123529  270029 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:16:31.123570  270029 ubuntu.go:177] setting up certificates
	I0601 11:16:31.123582  270029 provision.go:83] configureAuth start
	I0601 11:16:31.123653  270029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:16:31.154648  270029 provision.go:138] copyHostCerts
	I0601 11:16:31.154711  270029 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:16:31.154718  270029 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:16:31.154779  270029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:16:31.154874  270029 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:16:31.154884  270029 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:16:31.154907  270029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:16:31.155010  270029 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:16:31.155022  270029 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:16:31.155045  270029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:16:31.155086  270029 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220601110327-6708 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220601110327-6708]
	I0601 11:16:31.392219  270029 provision.go:172] copyRemoteCerts
	I0601 11:16:31.392269  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:16:31.392296  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.424693  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.507177  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:16:31.523691  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 11:16:31.539588  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:16:31.556391  270029 provision.go:86] duration metric: configureAuth took 432.782419ms
	I0601 11:16:31.556423  270029 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:16:31.556601  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:16:31.556613  270029 machine.go:91] provisioned docker machine in 3.740153286s
	I0601 11:16:31.556620  270029 start.go:306] post-start starting for "embed-certs-20220601110327-6708" (driver="docker")
	I0601 11:16:31.556627  270029 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:16:31.556665  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:16:31.556708  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.588692  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.671170  270029 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:16:31.673879  270029 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:16:31.673904  270029 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:16:31.673913  270029 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:16:31.673921  270029 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:16:31.673932  270029 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:16:31.673995  270029 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:16:31.674092  270029 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:16:31.674203  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:16:31.680491  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:16:31.696768  270029 start.go:309] post-start completed in 140.137646ms
	I0601 11:16:31.696823  270029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:16:31.696867  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.728967  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.808592  270029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:16:31.813696  270029 fix.go:57] fixHost completed within 4.475428594s
	I0601 11:16:31.813724  270029 start.go:81] releasing machines lock for "embed-certs-20220601110327-6708", held for 4.475478152s
	I0601 11:16:31.813806  270029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:16:31.845390  270029 ssh_runner.go:195] Run: systemctl --version
	I0601 11:16:31.845426  270029 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:16:31.845445  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.845474  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.878841  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.879529  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.984532  270029 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:16:31.995279  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:16:32.004188  270029 docker.go:187] disabling docker service ...
	I0601 11:16:32.004230  270029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:16:32.013110  270029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:16:32.021544  270029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:16:29.355547  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:31.855558  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:32.096568  270029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:16:32.177406  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:16:32.186287  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:16:32.198554  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:16:32.206479  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:16:32.214298  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:16:32.221739  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:16:32.229090  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:16:32.236531  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
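
	The base64 payload is tiny: "dmVyc2lvbiA9IDIK" decodes to a one-line drop-in pinning containerd to config schema version 2, which the sed edits above assume. This is verifiable locally:

		echo 'dmVyc2lvbiA9IDIK' | base64 -d
		# version = 2
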
	I0601 11:16:32.248478  270029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:16:32.254712  270029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:16:32.260784  270029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:16:32.332262  270029 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:16:32.400990  270029 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:16:32.401055  270029 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:16:32.405246  270029 start.go:468] Will wait 60s for crictl version
	I0601 11:16:32.405339  270029 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:16:32.431671  270029 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:16:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:16:33.855768  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:35.855971  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:38.355081  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:43.479123  270029 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:16:43.501672  270029 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:16:43.501721  270029 ssh_runner.go:195] Run: containerd --version
	I0601 11:16:43.529392  270029 ssh_runner.go:195] Run: containerd --version
	I0601 11:16:43.558583  270029 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:16:43.560125  270029 cli_runner.go:164] Run: docker network inspect embed-certs-20220601110327-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:16:43.591406  270029 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0601 11:16:43.594609  270029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:16:43.605543  270029 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:16:40.355331  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:42.855945  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:43.607033  270029 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:16:43.607086  270029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:16:43.629330  270029 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:16:43.629349  270029 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:16:43.629396  270029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:16:43.651491  270029 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:16:43.651512  270029 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:16:43.651566  270029 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:16:43.675463  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:16:43.675488  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:16:43.675505  270029 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:16:43.675522  270029 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220601110327-6708 NodeName:embed-certs-20220601110327-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:16:43.675702  270029 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220601110327-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
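
	The rendered config stacks four kubeadm documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. It is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below; comparing it against the copy from the previous start shows what changed (a sketch; the path of the old copy is an assumption):

		# inside the node: diff the previous and freshly rendered kubeadm configs
		sudo diff /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
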
	
	I0601 11:16:43.675851  270029 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220601110327-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
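
	Note that --cni-conf-dir=/etc/cni/net.mk in the kubelet ExecStart above matches the conf_dir rewrite applied to /etc/containerd/config.toml earlier, so kubelet and the runtime agree on where CNI configs live. A quick check (a sketch, run inside the node):

		# confirm the flag landed in the generated systemd drop-in
		systemctl cat kubelet | grep -- --cni-conf-dir
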
	I0601 11:16:43.675928  270029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:16:43.682788  270029 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:16:43.682841  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:16:43.689239  270029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0601 11:16:43.701365  270029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:16:43.712899  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0601 11:16:43.724782  270029 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:16:43.727472  270029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:16:43.736002  270029 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708 for IP: 192.168.76.2
	I0601 11:16:43.736086  270029 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:16:43.736130  270029 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:16:43.736196  270029 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.key
	I0601 11:16:43.736241  270029 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25
	I0601 11:16:43.736273  270029 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key
	I0601 11:16:43.736370  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:16:43.736396  270029 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:16:43.736408  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:16:43.736433  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:16:43.736458  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:16:43.736488  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:16:43.736535  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:16:43.737038  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:16:43.753252  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:16:43.769071  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:16:43.785137  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:16:43.800815  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:16:43.816567  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:16:43.832435  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:16:43.848147  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:16:43.864438  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:16:43.880361  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:16:43.896362  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:16:43.912480  270029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:16:43.924191  270029 ssh_runner.go:195] Run: openssl version
	I0601 11:16:43.928562  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:16:43.935311  270029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:16:43.938057  270029 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:16:43.938091  270029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:16:43.942508  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:16:43.948891  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:16:43.955605  270029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:16:43.958385  270029 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:16:43.958427  270029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:16:43.962842  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:16:43.969066  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:16:43.975850  270029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:16:43.978786  270029 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:16:43.978822  270029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:16:43.983269  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
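
The repeating test/ls/openssl/ln pattern above installs each CA into the node's trust store using OpenSSL's subject-hash convention: TLS clients resolve an issuer by opening /etc/ssl/certs/<subject-hash>.0, so each PEM gets a stable symlink plus a hash-named symlink. A minimal sketch for one of the CAs (paths and the b5213941 hash are taken from the log):

    # link the CA into /etc/ssl/certs, then add the hash-named lookup symlink
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
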
	I0601 11:16:43.989455  270029 kubeadm.go:395] StartCluster: {Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:16:43.989553  270029 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:16:43.989584  270029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:16:44.014119  270029 cri.go:87] found id: "f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	I0601 11:16:44.014147  270029 cri.go:87] found id: "4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6"
	I0601 11:16:44.014155  270029 cri.go:87] found id: "d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a"
	I0601 11:16:44.014160  270029 cri.go:87] found id: "c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0"
	I0601 11:16:44.014169  270029 cri.go:87] found id: "a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f"
	I0601 11:16:44.014178  270029 cri.go:87] found id: "b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2"
	I0601 11:16:44.014195  270029 cri.go:87] found id: ""
	I0601 11:16:44.014231  270029 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:16:44.026017  270029 cri.go:114] JSON = null
	W0601 11:16:44.026068  270029 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
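
The warning above comes from a cross-check between two views of the runtime: crictl reports six kube-system containers, but runc's JSON listing for the same containerd root returns null, so the unpause pass finds nothing to do and is skipped. The two probes, exactly as run in the log:

    # CRI view: all kube-system container IDs (running or not)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # runc view of the same root, used to find paused containers
    sudo runc --root /run/containerd/runc/k8s.io list -f json
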
	I0601 11:16:44.026121  270029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:16:44.032599  270029 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:16:44.032619  270029 kubeadm.go:626] restartCluster start
	I0601 11:16:44.032657  270029 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:16:44.038572  270029 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.039184  270029 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220601110327-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:16:44.039521  270029 kubeconfig.go:127] "embed-certs-20220601110327-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:16:44.040098  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:16:44.041394  270029 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:16:44.047555  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.047587  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.054922  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.255283  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.255367  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.263875  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.455148  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.455218  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.463550  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.655853  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.655952  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.664417  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.855542  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.855598  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.863960  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.055126  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.055211  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.063480  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.255826  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.255924  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.264353  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.455654  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.455728  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.464072  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.655400  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.655474  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.664018  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.855135  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.855220  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.863919  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.055139  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.055234  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.063984  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.255234  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.255309  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.263465  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.455752  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.455834  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.464271  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.655553  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.655615  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.664388  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.855579  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.855653  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.864130  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.354907  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:47.355242  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:47.055676  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:47.055754  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:47.064444  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.064468  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:47.064499  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:47.072080  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.072108  270029 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
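
Everything from the first "Checking apiserver status" line down to here is one poll loop: roughly every 200ms the same pgrep probe is re-run over SSH, each non-zero exit is logged as "stopped", and once the retry budget is exhausted the cluster is declared in need of reconfiguration. An illustrative standalone equivalent of the probe loop (the real loop also enforces a deadline):

    # wait until the apiserver process exists on the node
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.2; done
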
	I0601 11:16:47.072115  270029 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:16:47.072127  270029 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:16:47.072169  270029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:16:47.097085  270029 cri.go:87] found id: "f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	I0601 11:16:47.097117  270029 cri.go:87] found id: "4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6"
	I0601 11:16:47.097128  270029 cri.go:87] found id: "d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a"
	I0601 11:16:47.097138  270029 cri.go:87] found id: "c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0"
	I0601 11:16:47.097146  270029 cri.go:87] found id: "a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f"
	I0601 11:16:47.097156  270029 cri.go:87] found id: "b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2"
	I0601 11:16:47.097162  270029 cri.go:87] found id: ""
	I0601 11:16:47.097167  270029 cri.go:232] Stopping containers: [f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187 4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6 d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0 a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2]
	I0601 11:16:47.097217  270029 ssh_runner.go:195] Run: which crictl
	I0601 11:16:47.099999  270029 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187 4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6 d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0 a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2
	I0601 11:16:47.124540  270029 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:16:47.134618  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:16:47.141742  270029 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  1 11:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:03 /etc/kubernetes/scheduler.conf
	
	I0601 11:16:47.141795  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:16:47.148369  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:16:47.154571  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:16:47.160776  270029 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.160822  270029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:16:47.166675  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:16:47.172938  270029 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.172978  270029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
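
The grep/rm sequence above prunes stale component kubeconfigs: each file under /etc/kubernetes is kept only if it already points at the expected endpoint https://control-plane.minikube.internal:8443. Here admin.conf and kubelet.conf pass, while controller-manager.conf and scheduler.conf fail the grep (exit status 1) and are removed so kubeadm can regenerate them. The pattern, condensed:

    # drop any component config that does not reference the expected endpoint
    for f in /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "$f" || sudo rm -f "$f"
    done
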
	I0601 11:16:47.179087  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:16:47.185727  270029 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:16:47.185749  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:47.228261  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:48.197494  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:48.329624  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:48.378681  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
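
Rather than a full kubeadm init, the restart path replays the individual init phases against the saved config, in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. Collected from the five Run lines above:

    # phased restart against the cached v1.23.6 binaries
    cfg=/var/tmp/minikube/kubeadm.yaml
    bin=/var/lib/minikube/binaries/v1.23.6
    sudo env PATH="$bin:$PATH" kubeadm init phase certs all         --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase kubeconfig all    --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase kubelet-start     --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase control-plane all --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase etcd local        --config "$cfg"
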
	I0601 11:16:48.420684  270029 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:16:48.420732  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:48.929035  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:49.428979  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:49.928976  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:50.428888  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:50.928698  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:51.428664  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:51.929701  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:49.355631  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:51.854986  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:52.429050  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:52.928894  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:53.429111  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:53.929528  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:54.429038  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:54.463645  270029 api_server.go:71] duration metric: took 6.042967785s to wait for apiserver process to appear ...
	I0601 11:16:54.463674  270029 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:16:54.463686  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:54.464059  270029 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0601 11:16:54.964315  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:53.855517  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:56.355928  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:57.340901  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:16:57.340932  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:16:57.464200  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:57.470124  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:16:57.470161  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:16:57.964628  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:57.969079  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:16:57.969109  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:16:58.464413  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:58.469280  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:16:58.469323  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:16:58.964873  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:58.969629  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
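
The four probes above show startup converging: first 403 (anonymous access still blocked), then 500 with the rbac/bootstrap-roles, scheduling priority-class, priority-and-fairness, and apiservice-registration poststarthooks pending, then 500 with only rbac/bootstrap-roles pending, and finally 200 ok. The per-check breakdown in the 500 bodies corresponds to the verbose form of the endpoint:

    # per-check status lines like those logged above
    curl -k 'https://192.168.76.2:8443/healthz?verbose'
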
	I0601 11:16:58.976323  270029 api_server.go:140] control plane version: v1.23.6
	I0601 11:16:58.976349  270029 api_server.go:130] duration metric: took 4.512668885s to wait for apiserver health ...
	I0601 11:16:58.976362  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:16:58.976370  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:16:58.978490  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:16:58.979893  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:16:58.983633  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:16:58.983655  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:16:58.996686  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
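
With the docker driver and the containerd runtime, the CNI manager recommends kindnet; the rendered manifest is copied to the node and applied with the cluster's own cached kubectl, after a stat confirms the portmap CNI plugin binary exists. The precondition and the apply, as run above:

    stat /opt/cni/bin/portmap
    sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
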
	I0601 11:16:59.594447  270029 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:16:59.601657  270029 system_pods.go:59] 9 kube-system pods found
	I0601 11:16:59.601692  270029 system_pods.go:61] "coredns-64897985d-9dpfv" [2fd986d2-2806-41d0-b75f-04a9f5883420] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:16:59.601699  270029 system_pods.go:61] "etcd-embed-certs-20220601110327-6708" [696f91cd-2833-44cc-80cb-7cff571b5b35] Running
	I0601 11:16:59.601709  270029 system_pods.go:61] "kindnet-92tfl" [1e2e52a8-4f89-49af-9741-f79384628a29] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:16:59.601719  270029 system_pods.go:61] "kube-apiserver-embed-certs-20220601110327-6708" [a1b6d250-97ce-4261-983a-a43004795368] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:16:59.601741  270029 system_pods.go:61] "kube-controller-manager-embed-certs-20220601110327-6708" [2f9b6898-a046-4ff4-8a25-f38e0bfc8ebd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:16:59.601766  270029 system_pods.go:61] "kube-proxy-99lsz" [c2f232c6-4807-4bcf-a1ca-c39489a0112a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:16:59.601778  270029 system_pods.go:61] "kube-scheduler-embed-certs-20220601110327-6708" [846abe25-58d2-4c73-8fb2-bd8f7d4cd289] Running
	I0601 11:16:59.601786  270029 system_pods.go:61] "metrics-server-b955d9d8-c4kht" [b1221545-5b1f-4fd0-9d91-732fae262566] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:16:59.601813  270029 system_pods.go:61] "storage-provisioner" [8d62c4a6-0f6f-4855-adc3-3347614c0287] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:16:59.601825  270029 system_pods.go:74] duration metric: took 7.351583ms to wait for pod list to return data ...
	I0601 11:16:59.601839  270029 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:16:59.604272  270029 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:16:59.604298  270029 node_conditions.go:123] node cpu capacity is 8
	I0601 11:16:59.604311  270029 node_conditions.go:105] duration metric: took 2.462157ms to run NodePressure ...
	I0601 11:16:59.604330  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:59.726966  270029 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:16:59.731041  270029 kubeadm.go:777] kubelet initialised
	I0601 11:16:59.731062  270029 kubeadm.go:778] duration metric: took 4.07535ms waiting for restarted kubelet to initialise ...
	I0601 11:16:59.731070  270029 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:16:59.737745  270029 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
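
From here on, each pod_ready.go:102 line is one poll of the extra wait: up to 4m0s per system-critical pod, and coredns-64897985d-9dpfv stays Pending because the node still carries the node.kubernetes.io/not-ready taint, so the scheduler reports 0/1 nodes available. The polling happens through the Go client, but an illustrative kubectl equivalent of the same wait would be:

    kubectl -n kube-system wait --for=condition=Ready \
      pod/coredns-64897985d-9dpfv --timeout=4m
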
	I0601 11:17:01.743720  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:58.855967  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:01.355221  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:03.356805  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:04.243101  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:06.743027  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:05.855005  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:07.855118  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:09.243031  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:11.742744  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:09.855246  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:11.855357  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:13.743254  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:16.242930  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:14.355967  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:16.855640  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:19.355251  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:20.353024  254820 pod_ready.go:81] duration metric: took 4m0.003317239s waiting for pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace to be "Ready" ...
	E0601 11:17:20.353048  254820 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:17:20.353067  254820 pod_ready.go:38] duration metric: took 4m0.008046261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:17:20.353090  254820 kubeadm.go:630] restartCluster took 5m9.726790355s
	W0601 11:17:20.353201  254820 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 11:17:20.353229  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:17:21.540348  254820 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.18709622s)
	I0601 11:17:21.540403  254820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:17:21.550073  254820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:17:21.557225  254820 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:17:21.557279  254820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:17:21.564483  254820 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:17:21.564542  254820 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
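
Having given up on the in-place restart, process 254820 resets the v1.16.0 cluster and re-initialises it from scratch. The long --ignore-preflight-errors list suppresses checks that are expected to fail inside a docker-driver container, such as Swap, SystemVerification, the already-populated /var/lib/minikube directories, and the in-use kubelet port 10250. The reset half, as run above:

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset \
      --cri-socket /run/containerd/containerd.sock --force
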
	I0601 11:17:18.243905  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:20.743239  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:21.918912  254820 out.go:204]   - Generating certificates and keys ...
	I0601 11:17:22.738748  254820 out.go:204]   - Booting up control plane ...
	I0601 11:17:23.242797  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:25.244019  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:27.743633  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:30.243041  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:31.781643  254820 out.go:204]   - Configuring RBAC rules ...
	I0601 11:17:32.197904  254820 cni.go:95] Creating CNI manager for ""
	I0601 11:17:32.197928  254820 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:17:32.199768  254820 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:17:32.201271  254820 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:17:32.204901  254820 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0601 11:17:32.204927  254820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:17:32.218979  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:17:32.428375  254820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:17:32.428491  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:32.428492  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=old-k8s-version-20220601105850-6708 minikube.k8s.io/updated_at=2022_06_01T11_17_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:32.435199  254820 ops.go:34] apiserver oom_adj: -16
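
The oom_adj probe above confirms the API server process has OOM protection: a negative value (here -16) makes the kernel's OOM killer strongly prefer other victims. A sketch of the same check in Go, mirroring cat /proc/$(pgrep kube-apiserver)/oom_adj (assumes pgrep matches exactly one pid):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		panic(err)
	}
	// A negative score tells the OOM killer to avoid the apiserver.
	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
}
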
	I0601 11:17:32.502431  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:33.111963  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:33.611972  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:32.243074  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:34.742805  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:36.743017  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:34.111941  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:34.612095  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:35.112547  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:35.612182  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:36.111959  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:36.612404  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:37.111711  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:37.612424  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:38.112268  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:38.612434  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:38.743858  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:41.242806  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:39.112426  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:39.612518  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:40.111917  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:40.612600  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:41.112173  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:41.611972  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:42.112578  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:42.611825  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:43.111943  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:43.611988  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:43.243005  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:45.742725  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:44.111909  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:44.612461  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:45.111634  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:45.611836  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:46.111993  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:46.612464  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:47.112043  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:47.179274  254820 kubeadm.go:1045] duration metric: took 14.750830236s to wait for elevateKubeSystemPrivileges.
	I0601 11:17:47.179303  254820 kubeadm.go:397] StartCluster complete in 5m36.59748449s
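
The run of roughly thirty identical "kubectl get sa default" commands above is a simple 500 ms retry loop: the controller-manager only creates the default service account once the control plane settles, and elevateKubeSystemPrivileges waits for it, which the duration metric puts at about 14.75 s here. A polling sketch of the same idea (the 2-minute deadline is an assumption, not minikube's actual timeout):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
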
	I0601 11:17:47.179319  254820 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:17:47.179406  254820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:17:47.180983  254820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:17:47.695922  254820 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220601105850-6708" rescaled to 1
	I0601 11:17:47.695995  254820 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:17:47.699321  254820 out.go:177] * Verifying Kubernetes components...
	I0601 11:17:47.696036  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:17:47.696052  254820 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:17:47.696246  254820 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:17:47.700668  254820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:17:47.700702  254820 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700714  254820 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700720  254820 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700726  254820 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700729  254820 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700730  254820 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220601105850-6708"
	W0601 11:17:47.700732  254820 addons.go:165] addon storage-provisioner should already be in state true
	W0601 11:17:47.700739  254820 addons.go:165] addon dashboard should already be in state true
	W0601 11:17:47.700738  254820 addons.go:165] addon metrics-server should already be in state true
	I0601 11:17:47.700777  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.700784  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.700790  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.700709  254820 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700824  254820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.701173  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.701259  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.701279  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.701285  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.713330  254820 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
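
From here pid 254820 settles into its own wait loop: up to 6m0s polling the node object for a Ready condition, in parallel with the addon setup below. A condensed sketch of an equivalent check via kubectl (minikube itself uses client-go; the polling interval here is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "get", "node",
			"old-k8s-version-20220601105850-6708", jsonpath).Output()
		if strings.TrimSpace(string(out)) == "True" {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("node never became Ready")
}
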
	I0601 11:17:47.750554  254820 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220601105850-6708"
	W0601 11:17:47.750578  254820 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:17:47.750614  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.754763  254820 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:17:47.750975  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.758252  254820 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:17:47.757064  254820 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:17:47.758285  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:17:47.758359  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.759985  254820 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:17:47.761242  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:17:47.761281  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:17:47.761332  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.764485  254820 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:17:47.765882  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:17:47.765902  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:17:47.765947  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.809745  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
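
The sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a hosts block mapping host.minikube.internal to the Docker gateway (192.168.58.1) just before the "forward . /etc/resolv.conf" line, then feeds the result back through kubectl replace. The same string transformation in Go (a sketch; the sample Corefile is a trimmed stand-in, not the cluster's exact one):

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
}`
	hosts := `        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
`
	// Splice the hosts block in directly ahead of the forward plugin,
	// mirroring the sed expression in the log line above.
	patched := strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	fmt.Println(patched)
}
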
	I0601 11:17:47.813277  254820 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:17:47.813299  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:17:47.813350  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.814249  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.830337  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.832848  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.854014  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
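
Each of those SSH clients targets the host port Docker mapped to the node container's sshd; the docker container inspect -f template in the log extracts it (49422 here). An equivalent lookup from Go, reusing the exact format string from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "old-k8s-version-20220601105850-6708").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 49422
}
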
	I0601 11:17:47.968702  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:17:47.969398  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:17:47.969416  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:17:47.969640  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:17:47.970189  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:17:47.970208  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:17:48.055776  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:17:48.055807  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:17:48.057898  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:17:48.057918  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:17:48.072675  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:17:48.072702  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:17:48.072788  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:17:48.072805  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:17:48.154620  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:17:48.154657  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:17:48.158011  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:17:48.181023  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:17:48.181054  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:17:48.271474  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:17:48.271501  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:17:48.275017  254820 start.go:806] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0601 11:17:48.360863  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:17:48.360891  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:17:48.376549  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:17:48.376580  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:17:48.392167  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:17:48.392196  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:17:48.464169  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:17:48.882430  254820 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20220601105850-6708"
	I0601 11:17:49.264813  254820 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:17:47.745681  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:50.242914  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:49.266465  254820 addons.go:417] enableAddons completed in 1.57041232s
	I0601 11:17:49.718664  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:17:51.719973  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:17:52.742723  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:54.742788  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:56.742912  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:54.219149  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:17:56.719562  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:17:58.743796  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:01.242721  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:59.218935  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:01.719652  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:03.243620  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:05.742777  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:04.219204  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:06.719494  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:07.742900  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:09.743030  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:09.219176  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:11.718880  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:13.719312  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:12.242806  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:14.243041  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:16.742612  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:15.719670  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:18.219172  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:18.742966  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:21.243088  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:20.219521  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:22.719227  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:23.245196  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:25.742790  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:24.719365  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:26.719411  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:27.743212  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:30.243627  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:29.218801  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:31.219603  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:33.719821  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:32.743319  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:35.242980  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:36.219334  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:38.219629  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:37.243134  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:39.742862  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:40.219897  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:42.719206  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:42.242887  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:44.243121  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:46.742692  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:44.719361  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:46.719965  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:48.742793  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:51.243730  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:49.219161  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:51.719823  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:53.742610  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:55.742817  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:54.219442  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:56.719307  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:18:57.742887  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:00.244895  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:18:59.218862  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:01.219115  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:03.219470  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:02.743210  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:05.242775  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:05.719920  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:08.219261  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:07.243536  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:09.743457  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:11.743691  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:10.719799  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:13.219313  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:13.743775  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:16.242793  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:15.220539  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:17.719072  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:18.243014  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:20.742913  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:19.719157  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:22.219444  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:22.743021  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:24.743212  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:24.718931  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:26.719409  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:28.719822  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:27.243432  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:29.743172  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:31.219776  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:33.719660  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:32.242892  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:34.242952  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:36.742808  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:36.219188  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:38.219280  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fec5fcb4aee4a       6de166512aa22       25 seconds ago      Exited              kindnet-cni               7                   65b8c60551ae4
	313035e9674ff       4c03754524064       12 minutes ago      Running             kube-proxy                0                   c6ff76a6b51bf
	f9746f111b56a       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   9e938dc1f669a
	0b15aeee4f551       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   1fa00271568ab
	627fd5c08820c       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   a871ea5dc3032
	6ce85ae821e03       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   73e15160f8342
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:07:03 UTC, end at Wed 2022-06-01 11:19:40 UTC. --
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.984096913Z" level=warning msg="cleaning up after shim disconnected" id=783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2 namespace=k8s.io
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.984110739Z" level=info msg="cleaning up dead shim"
	Jun 01 11:11:21 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:21.993468203Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:11:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2449 runtime=io.containerd.runc.v2\n"
	Jun 01 11:11:22 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:22.095149153Z" level=info msg="RemoveContainer for \"8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9\""
	Jun 01 11:11:22 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:11:22.099371016Z" level=info msg="RemoveContainer for \"8af1e71b79f5c05909f7b5ac1a7551901a085d87a5e5ee09971994295cb2b2c9\" returns successfully"
	Jun 01 11:14:02 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:02.586772350Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Jun 01 11:14:02 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:02.598817303Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\""
	Jun 01 11:14:02 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:02.599241470Z" level=info msg="StartContainer for \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\""
	Jun 01 11:14:02 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:02.668660041Z" level=info msg="StartContainer for \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\" returns successfully"
	Jun 01 11:14:12 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:12.893305963Z" level=info msg="shim disconnected" id=937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76
	Jun 01 11:14:12 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:12.893373288Z" level=warning msg="cleaning up after shim disconnected" id=937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76 namespace=k8s.io
	Jun 01 11:14:12 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:12.893386248Z" level=info msg="cleaning up dead shim"
	Jun 01 11:14:12 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:12.902625641Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:14:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2779 runtime=io.containerd.runc.v2\n"
	Jun 01 11:14:13 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:13.400464046Z" level=info msg="RemoveContainer for \"783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2\""
	Jun 01 11:14:13 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:14:13.405029474Z" level=info msg="RemoveContainer for \"783ef41102cfd7ad4ba6d335d930063b2d735fbe8ff6d9b435ff65af1cc658e2\" returns successfully"
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:15.586562484Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:7,}"
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:15.599083872Z" level=info msg="CreateContainer within sandbox \"65b8c60551ae491626460bc8b42f164144cfeb7dea5063c8082b526389027897\" for &ContainerMetadata{Name:kindnet-cni,Attempt:7,} returns container id \"fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd\""
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:15.599621933Z" level=info msg="StartContainer for \"fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd\""
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:15.757658411Z" level=info msg="StartContainer for \"fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd\" returns successfully"
	Jun 01 11:19:25 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:25.983536573Z" level=info msg="shim disconnected" id=fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd
	Jun 01 11:19:25 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:25.983603412Z" level=warning msg="cleaning up after shim disconnected" id=fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd namespace=k8s.io
	Jun 01 11:19:25 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:25.983617053Z" level=info msg="cleaning up dead shim"
	Jun 01 11:19:25 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:25.992675415Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:19:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2882 runtime=io.containerd.runc.v2\n"
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:26.564844435Z" level=info msg="RemoveContainer for \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\""
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 containerd[516]: time="2022-06-01T11:19:26.568945671Z" level=info msg="RemoveContainer for \"937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220601110654-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220601110654-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_07_22_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:07:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220601110654-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:19:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:17:49 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:17:49 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:17:49 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:17:49 +0000   Wed, 01 Jun 2022 11:07:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220601110654-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                c3073178-0849-48bb-88da-ba72ab8c4ba0
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220601110654-6708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-7fspq                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220601110654-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220601110654-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-slzcl                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220601110654-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 12m   kube-proxy  
	  Normal  Starting                 12m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44] <==
	* {"level":"info","ts":"2022-06-01T11:07:16.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:07:16.175Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220601110654-6708 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:07:16.176Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:07:16.177Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:07:16.178Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:07:16.178Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:07:16.177Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-06-01T11:14:25.280Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"275.701314ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128013397418876628 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20220601110654-6708\" mod_revision:621 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20220601110654-6708\" value_size:588 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20220601110654-6708\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T11:14:25.280Z","caller":"traceutil/trace.go:171","msg":"trace[249815246] linearizableReadLoop","detail":"{readStateIndex:724; appliedIndex:723; }","duration":"194.762209ms","start":"2022-06-01T11:14:25.085Z","end":"2022-06-01T11:14:25.280Z","steps":["trace[249815246] 'read index received'  (duration: 13.948823ms)","trace[249815246] 'applied index is now lower than readState.Index'  (duration: 180.811748ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T11:14:25.280Z","caller":"traceutil/trace.go:171","msg":"trace[1228048321] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"289.883037ms","start":"2022-06-01T11:14:24.990Z","end":"2022-06-01T11:14:25.280Z","steps":["trace[1228048321] 'process raft request'  (duration: 13.589499ms)","trace[1228048321] 'compare'  (duration: 275.590014ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:14:25.280Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"194.895257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:14:25.280Z","caller":"traceutil/trace.go:171","msg":"trace[64683920] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:623; }","duration":"194.930063ms","start":"2022-06-01T11:14:25.085Z","end":"2022-06-01T11:14:25.280Z","steps":["trace[64683920] 'agreement among raft nodes before linearized reading'  (duration: 194.877912ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T11:17:16.484Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":585}
	{"level":"info","ts":"2022-06-01T11:17:16.485Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":585,"took":"542.037µs"}
	
	* 
	* ==> kernel <==
	*  11:19:40 up  1:02,  0 users,  load average: 0.43, 1.97, 2.06
	Linux default-k8s-different-port-20220601110654-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90] <==
	* I0601 11:07:18.453282       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:07:18.453289       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:07:18.453299       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:07:18.453427       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:07:18.454162       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:07:18.464230       1 controller.go:611] quota admission added evaluator for: namespaces
	I0601 11:07:19.313010       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:07:19.313033       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:07:19.318632       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 11:07:19.321753       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 11:07:19.321788       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:07:19.672421       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:07:19.701304       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:07:19.786756       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:07:19.792151       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 11:07:19.793209       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:07:19.796644       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:07:20.164772       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:07:20.480504       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:07:21.468664       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:07:21.475420       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:07:21.484951       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:07:33.885430       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:07:34.285929       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:07:34.903429       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787] <==
	* I0601 11:07:33.334022       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 11:07:33.334139       1 event.go:294] "Event occurred" object="default-k8s-different-port-20220601110654-6708" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node default-k8s-different-port-20220601110654-6708 event: Registered Node default-k8s-different-port-20220601110654-6708 in Controller"
	I0601 11:07:33.340437       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0601 11:07:33.340465       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0601 11:07:33.342708       1 shared_informer.go:247] Caches are synced for namespace 
	I0601 11:07:33.370084       1 shared_informer.go:247] Caches are synced for service account 
	I0601 11:07:33.410234       1 shared_informer.go:247] Caches are synced for expand 
	I0601 11:07:33.416497       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:07:33.464111       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:07:33.474301       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0601 11:07:33.482924       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:07:33.484099       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:07:33.522810       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:07:33.526980       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:33.535611       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:33.887240       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:07:33.937891       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:33.937920       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:07:33.958070       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:34.291990       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7fspq"
	I0601 11:07:34.293024       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-slzcl"
	I0601 11:07:34.337886       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-zbtdx"
	I0601 11:07:34.342039       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9gcj2"
	I0601 11:07:34.693996       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:07:34.702363       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-zbtdx"
	
	* 
	* ==> kube-proxy [313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d] <==
	* I0601 11:07:34.878114       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:07:34.878163       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:07:34.878197       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:07:34.900526       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:07:34.900564       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:07:34.900573       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:07:34.900595       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:07:34.900961       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:07:34.901514       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:07:34.901535       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:07:34.901567       1 config.go:317] "Starting service config controller"
	I0601 11:07:34.901573       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:07:35.002527       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:07:35.002535       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e] <==
	* W0601 11:07:18.472752       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:07:18.472806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:07:18.472922       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:07:18.473037       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.473083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.473043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:07:18.472942       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:07:18.473159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:07:18.473644       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.473712       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:18.473647       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:07:18.473764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:07:18.475610       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:18.475814       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:19.293620       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:07:19.293655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:07:19.295513       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:19.295539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:19.320706       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:07:19.320741       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:07:19.376036       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:07:19.376074       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:07:19.399236       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:07:19.399272       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:07:22.265287       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:07:03 UTC, end at Wed 2022-06-01 11:19:41 UTC. --
	Jun 01 11:18:35 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:18:35.584173    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:18:35 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:35.584481    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:18:36 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:36.962867    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:18:41 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:41.964090    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:18:46 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:46.965181    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:18:50 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:18:50.585106    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:18:50 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:50.585387    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:18:51 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:51.966429    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:18:56 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:18:56.967708    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:01 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:01.968887    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:04 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:04.584531    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:19:04 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:04.584937    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:19:06 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:06.969644    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:11 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:11.971217    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:15 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:15.584386    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:19:16 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:16.972574    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:21 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:21.973658    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:26.563628    1317 scope.go:110] "RemoveContainer" containerID="937c5d1645be40fa7ab5a0a0f9569191312b4b62cad550f4d1269339e22ade76"
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:26.929824    1317 scope.go:110] "RemoveContainer" containerID="fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:26.930069    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	Jun 01 11:19:26 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:26.974424    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:31 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:31.975965    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:36 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:36.977187    1317 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:19:37 default-k8s-different-port-20220601110654-6708 kubelet[1317]: I0601 11:19:37.584389    1317 scope.go:110] "RemoveContainer" containerID="fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	Jun 01 11:19:37 default-k8s-different-port-20220601110654-6708 kubelet[1317]: E0601 11:19:37.584647    1317 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-7fspq_kube-system(eefcd8e6-51e4-4d48-a420-93f4b47cf732)\"" pod="kube-system/kindnet-7fspq" podUID=eefcd8e6-51e4-4d48-a420-93f4b47cf732
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-9gcj2 storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 describe pod busybox coredns-64897985d-9gcj2 storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod busybox coredns-64897985d-9gcj2 storage-provisioner: exit status 1 (58.989654ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9mjz (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-c9mjz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  49s (x8 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-9gcj2" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod busybox coredns-64897985d-9gcj2 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.43s)
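
Both failures above share one root cause visible in the captured logs: the kindnet-cni container is in CrashLoopBackOff, so the CNI plugin never initializes, the node keeps its node.kubernetes.io/not-ready:NoSchedule taint, and every pod that does not tolerate it (busybox, coredns) stays Pending/Unschedulable. Below is a minimal client-go sketch of the kind of check node_ready.go/pod_ready.go perform; the kubeconfig path and the five-iteration poll are illustrative only and are not the harness code:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Illustrative path; the harness derives this from MINIKUBE_HOME/kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		const node = "default-k8s-different-port-20220601110654-6708"
		for i := 0; i < 5; i++ {
			n, err := client.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			// The Ready condition is what the node_ready.go:58 lines above poll.
			for _, c := range n.Status.Conditions {
				if c.Type == v1.NodeReady {
					fmt.Printf("node %q Ready=%s reason=%s\n", node, c.Status, c.Reason)
				}
			}
			// While the CNI is down, this taint blocks every non-tolerating pod.
			for _, t := range n.Spec.Taints {
				fmt.Printf("  taint %s:%s\n", t.Key, t.Effect)
			}
			time.Sleep(2 * time.Second)
		}
	}

Run against the failing profile, this would be expected to print Ready=False with reason KubeletNotReady plus the not-ready:NoSchedule taint, matching the describe-nodes output above.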

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (596.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220601105850-6708 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0601 11:12:02.301233    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:12:12.928511    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 11:12:21.870310    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 11:12:54.651945    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:13:24.222254    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:13:34.904568    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20220601105850-6708 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (9m54.067325276s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220601105850-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220601105850-6708 in cluster old-k8s-version-20220601105850-6708
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220601105850-6708" ...
	* Preparing Kubernetes v1.16.0 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image kubernetesui/dashboard:v2.5.1
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:11:53.724295  254820 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:11:53.724454  254820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:11:53.724463  254820 out.go:309] Setting ErrFile to fd 2...
	I0601 11:11:53.724469  254820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:11:53.724590  254820 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:11:53.724805  254820 out.go:303] Setting JSON to false
	I0601 11:11:53.726439  254820 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3268,"bootTime":1654078646,"procs":734,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:11:53.726494  254820 start.go:125] virtualization: kvm guest
	I0601 11:11:53.729211  254820 out.go:177] * [old-k8s-version-20220601105850-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:11:53.730925  254820 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:11:53.730888  254820 notify.go:193] Checking for updates...
	I0601 11:11:53.732473  254820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:11:53.733995  254820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:11:53.735532  254820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:11:53.736910  254820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:11:53.738516  254820 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:11:53.740186  254820 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0601 11:11:53.741469  254820 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:11:53.781013  254820 docker.go:137] docker version: linux-20.10.16
	I0601 11:11:53.781160  254820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:11:53.886676  254820 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:11:53.812470817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:11:53.886774  254820 docker.go:254] overlay module found
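
Note: the `docker system info --format "{{json .}}"` probes above are how minikube snapshots daemon state in a single machine-readable call. A minimal standalone equivalent, assuming `jq` is available on the host:

    # Query the Docker daemon once, then pull out the fields minikube inspects.
    docker system info --format '{{json .}}' > /tmp/docker-info.json
    jq -r '.Driver'        /tmp/docker-info.json   # storage driver, e.g. overlay2
    jq -r '.CgroupDriver'  /tmp/docker-info.json   # cgroupfs vs systemd
    jq -r '.ServerVersion' /tmp/docker-info.json   # e.g. 20.10.16
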
	I0601 11:11:53.889259  254820 out.go:177] * Using the docker driver based on existing profile
	I0601 11:11:53.890729  254820 start.go:284] selected driver: docker
	I0601 11:11:53.890747  254820 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220601105850-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601105850-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:11:53.890845  254820 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:11:53.892179  254820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:11:53.993641  254820 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-01 11:11:53.921933194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:11:53.993885  254820 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:11:53.993910  254820 cni.go:95] Creating CNI manager for ""
	I0601 11:11:53.993917  254820 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:11:53.993937  254820 start_flags.go:306] config:
	{Name:old-k8s-version-20220601105850-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601105850-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:11:53.996416  254820 out.go:177] * Starting control plane node old-k8s-version-20220601105850-6708 in cluster old-k8s-version-20220601105850-6708
	I0601 11:11:53.997807  254820 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:11:53.999194  254820 out.go:177] * Pulling base image ...
	I0601 11:11:54.000601  254820 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 11:11:54.000647  254820 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0601 11:11:54.000670  254820 cache.go:57] Caching tarball of preloaded images
	I0601 11:11:54.000679  254820 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:11:54.000934  254820 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:11:54.000953  254820 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0601 11:11:54.001114  254820 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/config.json ...
	I0601 11:11:54.046720  254820 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:11:54.046745  254820 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:11:54.046759  254820 cache.go:206] Successfully downloaded all kic artifacts
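
Note: the image.go lines above show the pull-avoidance check: the kic base image is looked up in the local daemon by its digest-pinned reference, and the pull is skipped on a hit. A minimal sketch of the same check in shell (the image reference is the one from the log):

    IMG="gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a"
    # `docker image inspect` exits non-zero when the image is absent locally.
    if docker image inspect "$IMG" >/dev/null 2>&1; then
        echo "found in local daemon, skipping pull"
    else
        docker pull "$IMG"
    fi
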
	I0601 11:11:54.046793  254820 start.go:352] acquiring machines lock for old-k8s-version-20220601105850-6708: {Name:mke14ebe59a9bafbbc986150da3a88f558d9476c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:11:54.046889  254820 start.go:356] acquired machines lock for "old-k8s-version-20220601105850-6708" in 67.177µs
	I0601 11:11:54.046910  254820 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:11:54.046918  254820 fix.go:55] fixHost starting: 
	I0601 11:11:54.047139  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:11:54.080225  254820 fix.go:103] recreateIfNeeded on old-k8s-version-20220601105850-6708: state=Stopped err=<nil>
	W0601 11:11:54.080259  254820 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:11:54.083646  254820 out.go:177] * Restarting existing docker container for "old-k8s-version-20220601105850-6708" ...
	I0601 11:11:54.085069  254820 cli_runner.go:164] Run: docker start old-k8s-version-20220601105850-6708
	I0601 11:11:54.450949  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:11:54.486417  254820 kic.go:416] container "old-k8s-version-20220601105850-6708" state is running.
	I0601 11:11:54.486752  254820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601105850-6708
	I0601 11:11:54.519816  254820 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/config.json ...
	I0601 11:11:54.520042  254820 machine.go:88] provisioning docker machine ...
	I0601 11:11:54.520072  254820 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601105850-6708"
	I0601 11:11:54.520109  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:11:54.553465  254820 main.go:134] libmachine: Using SSH client type: native
	I0601 11:11:54.553648  254820 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0601 11:11:54.553669  254820 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601105850-6708 && echo "old-k8s-version-20220601105850-6708" | sudo tee /etc/hostname
	I0601 11:11:54.554337  254820 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59940->127.0.0.1:49422: read: connection reset by peer
	I0601 11:11:57.680163  254820 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601105850-6708
	
	I0601 11:11:57.680238  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:11:57.712912  254820 main.go:134] libmachine: Using SSH client type: native
	I0601 11:11:57.713044  254820 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0601 11:11:57.713068  254820 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601105850-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601105850-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601105850-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:11:57.827398  254820 main.go:134] libmachine: SSH cmd err, output: <nil>: 
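
Note: the SSH command above (together with the earlier `sudo hostname ... | sudo tee /etc/hostname`) is an idempotent hostname fix-up: an existing 127.0.1.1 entry is rewritten in place, and a new one is appended only when none exists. Generalized, with HOSTNAME as a placeholder for the profile name:

    HOSTNAME=my-node   # placeholder for the machine name
    sudo hostname "$HOSTNAME" && echo "$HOSTNAME" | sudo tee /etc/hostname
    if ! grep -q "\s$HOSTNAME\$" /etc/hosts; then
        if grep -q '^127.0.1.1\s' /etc/hosts; then
            # rewrite the existing loopback alias in place
            sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOSTNAME/" /etc/hosts
        else
            # no alias yet: append one
            echo "127.0.1.1 $HOSTNAME" | sudo tee -a /etc/hosts
        fi
    fi
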
	I0601 11:11:57.827429  254820 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:11:57.827447  254820 ubuntu.go:177] setting up certificates
	I0601 11:11:57.827456  254820 provision.go:83] configureAuth start
	I0601 11:11:57.827507  254820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601105850-6708
	I0601 11:11:57.860729  254820 provision.go:138] copyHostCerts
	I0601 11:11:57.860787  254820 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:11:57.860797  254820 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:11:57.860855  254820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:11:57.860943  254820 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:11:57.860956  254820 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:11:57.860979  254820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:11:57.861044  254820 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:11:57.861063  254820 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:11:57.861088  254820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:11:57.861133  254820 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601105850-6708 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601105850-6708]
	I0601 11:11:57.961136  254820 provision.go:172] copyRemoteCerts
	I0601 11:11:57.961185  254820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:11:57.961216  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:11:57.994878  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:11:58.082923  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0601 11:11:58.100651  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:11:58.117954  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:11:58.134470  254820 provision.go:86] duration metric: configureAuth took 307.002081ms
	I0601 11:11:58.134499  254820 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:11:58.134706  254820 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:11:58.134719  254820 machine.go:91] provisioned docker machine in 3.614662703s
	I0601 11:11:58.134727  254820 start.go:306] post-start starting for "old-k8s-version-20220601105850-6708" (driver="docker")
	I0601 11:11:58.134733  254820 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:11:58.134767  254820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:11:58.134798  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:11:58.169331  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:11:58.251012  254820 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:11:58.253639  254820 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:11:58.253666  254820 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:11:58.253674  254820 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:11:58.253679  254820 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:11:58.253688  254820 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:11:58.253733  254820 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:11:58.253805  254820 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:11:58.253874  254820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:11:58.260426  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:11:58.276825  254820 start.go:309] post-start completed in 142.084737ms
	I0601 11:11:58.276893  254820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:11:58.276932  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:11:58.309777  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:11:58.392285  254820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:11:58.396084  254820 fix.go:57] fixHost completed within 4.349162838s
	I0601 11:11:58.396106  254820 start.go:81] releasing machines lock for "old-k8s-version-20220601105850-6708", held for 4.349204206s
	I0601 11:11:58.396170  254820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601105850-6708
	I0601 11:11:58.430355  254820 ssh_runner.go:195] Run: systemctl --version
	I0601 11:11:58.430387  254820 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:11:58.430412  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:11:58.430449  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:11:58.464992  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:11:58.465404  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:11:58.570379  254820 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:11:58.582309  254820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:11:58.591899  254820 docker.go:187] disabling docker service ...
	I0601 11:11:58.591951  254820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:11:58.601422  254820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:11:58.609993  254820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:11:58.688299  254820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:11:58.763121  254820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:11:58.772004  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:11:58.784253  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.1"|' -i /etc/containerd/config.toml"
	I0601 11:11:58.791774  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:11:58.799234  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:11:58.806591  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:11:58.813977  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:11:58.821307  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0601 11:11:58.832988  254820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:11:58.839065  254820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:11:58.845000  254820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:11:58.917963  254820 ssh_runner.go:195] Run: sudo systemctl restart containerd
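
Note: the block from "disabling docker service" to the containerd restart above is the runtime hand-off: crio and docker are stopped, docker is disabled and masked, crictl is pointed at the containerd socket, the relevant config.toml values are rewritten with in-place sed, and containerd is restarted. Condensed into one sketch (paths and values taken from this log; run on the node):

    sudo systemctl stop -f crio docker.socket docker.service 2>/dev/null || true
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    # Point crictl at containerd's CRI socket.
    printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
                  'image-endpoint: unix:///run/containerd/containerd.sock' \
        | sudo tee /etc/crictl.yaml >/dev/null
    # Rewrite the settings minikube changes in /etc/containerd/config.toml.
    sudo sed -i 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml
    sudo sed -i 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' /etc/containerd/config.toml
    sudo sed -i 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
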
	I0601 11:11:58.988377  254820 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:11:58.988445  254820 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:11:58.992553  254820 start.go:468] Will wait 60s for crictl version
	I0601 11:11:58.992609  254820 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:11:59.017720  254820 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:11:59Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:12:10.064514  254820 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:12:10.087980  254820 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:12:10.088034  254820 ssh_runner.go:195] Run: containerd --version
	I0601 11:12:10.114520  254820 ssh_runner.go:195] Run: containerd --version
	I0601 11:12:10.143008  254820 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.6.4 ...
	I0601 11:12:10.144459  254820 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601105850-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:12:10.175954  254820 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0601 11:12:10.179206  254820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
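
Note: this /etc/hosts edit deliberately avoids `sed -i`: inside a container, /etc/hosts is a bind mount that cannot be replaced by rename, so the file is rebuilt in a temp file and copied back over the original. The same pattern, spelled out:

    # Drop any stale host.minikube.internal line, append the fresh one, then
    # overwrite with cp (rename-based tools fail on a bind-mounted /etc/hosts).
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.168.58.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
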
	I0601 11:12:10.190006  254820 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:12:10.191479  254820 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 11:12:10.191541  254820 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:12:10.214714  254820 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:12:10.214731  254820 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:12:10.214772  254820 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:12:10.236823  254820 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:12:10.236843  254820 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:12:10.236893  254820 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:12:10.260411  254820 cni.go:95] Creating CNI manager for ""
	I0601 11:12:10.260434  254820 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:12:10.260447  254820 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:12:10.260459  254820 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601105850-6708 NodeName:old-k8s-version-20220601105850-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:12:10.260570  254820 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220601105850-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601105850-6708
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:12:10.260646  254820 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220601105850-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601105850-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 11:12:10.260694  254820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 11:12:10.267204  254820 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:12:10.267278  254820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:12:10.273644  254820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (580 bytes)
	I0601 11:12:10.285799  254820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:12:10.298068  254820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
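
Note: the three scp calls above land the kubelet drop-in, the kubelet unit, and the kubeadm manifest on the node; the manifest targets kubeadm.k8s.io/v1beta1, the config version matching kubeadm v1.16. On a restart, minikube first diffs kubeadm.yaml against kubeadm.yaml.new (visible further down in this log); when the manifest does change, applying it by hand would look roughly like this sketch (the exact invocation is not shown in this excerpt):

    # Promote the new manifest and re-run kubeadm against it (assumed invocation).
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all
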
	I0601 11:12:10.309836  254820 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:12:10.312553  254820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:12:10.321352  254820 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708 for IP: 192.168.58.2
	I0601 11:12:10.321480  254820 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:12:10.321521  254820 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:12:10.321585  254820 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.key
	I0601 11:12:10.321635  254820 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.key.cee25041
	I0601 11:12:10.321677  254820 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.key
	I0601 11:12:10.321798  254820 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:12:10.321829  254820 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:12:10.321844  254820 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:12:10.321869  254820 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:12:10.321897  254820 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:12:10.321925  254820 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:12:10.321963  254820 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:12:10.323295  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:12:10.340236  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:12:10.356275  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:12:10.372465  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:12:10.388489  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:12:10.405538  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:12:10.422562  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:12:10.438770  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:12:10.455124  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:12:10.471097  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:12:10.487520  254820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:12:10.503501  254820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:12:10.515524  254820 ssh_runner.go:195] Run: openssl version
	I0601 11:12:10.519915  254820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:12:10.526849  254820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:12:10.529705  254820 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:12:10.529751  254820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:12:10.534305  254820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:12:10.540612  254820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:12:10.547333  254820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:12:10.550210  254820 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:12:10.550256  254820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:12:10.554544  254820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:12:10.560832  254820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:12:10.567891  254820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:12:10.570691  254820 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:12:10.570741  254820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:12:10.575235  254820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
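
Note: the openssl/ln pairs above implement OpenSSL's CA lookup convention: each trusted PEM under /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the test certs). A generic sketch that installs one certificate this way:

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # certificate to trust (from the log)
    # OpenSSL resolves CAs by subject-name hash, with suffix .0, .1, ...
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
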
	I0601 11:12:10.581826  254820 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601105850-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601105850-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:12:10.581915  254820 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:12:10.581951  254820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:12:10.606031  254820 cri.go:87] found id: "bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443"
	I0601 11:12:10.606058  254820 cri.go:87] found id: "01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3"
	I0601 11:12:10.606065  254820 cri.go:87] found id: "0b9cf8973c8844f5d3f241696625e5764fbd79a0c0fa64202fca8a67567e726a"
	I0601 11:12:10.606071  254820 cri.go:87] found id: "f18885873e44ef000cea8b73305d4b972b24f41b3a821ebf6ed2fbb3c400745d"
	I0601 11:12:10.606076  254820 cri.go:87] found id: "92f272874915c4877257c68e1d43539f7183cbef97f4b0837113afe72f1cdb3c"
	I0601 11:12:10.606082  254820 cri.go:87] found id: "e4d08ecd5adee34f6ccfaeb042d497cedc44597ee436ef3a30c0c98e725c3582"
	I0601 11:12:10.606091  254820 cri.go:87] found id: ""
	I0601 11:12:10.606124  254820 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:12:10.618963  254820 cri.go:114] JSON = null
	W0601 11:12:10.619017  254820 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0601 11:12:10.619081  254820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:12:10.626273  254820 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:12:10.626294  254820 kubeadm.go:626] restartCluster start
	I0601 11:12:10.626333  254820 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:12:10.641469  254820 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:12:10.642817  254820 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220601105850-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:12:10.643836  254820 kubeconfig.go:127] "old-k8s-version-20220601105850-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:12:10.644983  254820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:12:10.646922  254820 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:12:10.653994  254820 api_server.go:165] Checking apiserver status ...
	I0601 11:12:10.654033  254820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:12:10.661444  254820 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	(the "Checking apiserver status" / pgrep pair above repeated 15 more times at roughly 200 ms intervals, from 11:12:10.861 through 11:12:13.662, each attempt exiting with status 1 and empty stdout/stderr)
	I0601 11:12:13.670736  254820 api_server.go:165] Checking apiserver status ...
	I0601 11:12:13.670772  254820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:12:13.678432  254820 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:12:13.678452  254820 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 11:12:13.678458  254820 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:12:13.678469  254820 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:12:13.678512  254820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:12:13.701969  254820 cri.go:87] found id: "bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443"
	I0601 11:12:13.701997  254820 cri.go:87] found id: "01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3"
	I0601 11:12:13.702008  254820 cri.go:87] found id: "0b9cf8973c8844f5d3f241696625e5764fbd79a0c0fa64202fca8a67567e726a"
	I0601 11:12:13.702018  254820 cri.go:87] found id: "f18885873e44ef000cea8b73305d4b972b24f41b3a821ebf6ed2fbb3c400745d"
	I0601 11:12:13.702031  254820 cri.go:87] found id: "92f272874915c4877257c68e1d43539f7183cbef97f4b0837113afe72f1cdb3c"
	I0601 11:12:13.702046  254820 cri.go:87] found id: "e4d08ecd5adee34f6ccfaeb042d497cedc44597ee436ef3a30c0c98e725c3582"
	I0601 11:12:13.702053  254820 cri.go:87] found id: ""
	I0601 11:12:13.702061  254820 cri.go:232] Stopping containers: [bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443 01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3 0b9cf8973c8844f5d3f241696625e5764fbd79a0c0fa64202fca8a67567e726a f18885873e44ef000cea8b73305d4b972b24f41b3a821ebf6ed2fbb3c400745d 92f272874915c4877257c68e1d43539f7183cbef97f4b0837113afe72f1cdb3c e4d08ecd5adee34f6ccfaeb042d497cedc44597ee436ef3a30c0c98e725c3582]
	I0601 11:12:13.702100  254820 ssh_runner.go:195] Run: which crictl
	I0601 11:12:13.704800  254820 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop bc215458271657413ab56e24b8958038bee4a907217ca9bc43a5ecc1e2339443 01651d3598805140172b9f0f86349cd8cad0f336647501ce25f9120bcb1f7dc3 0b9cf8973c8844f5d3f241696625e5764fbd79a0c0fa64202fca8a67567e726a f18885873e44ef000cea8b73305d4b972b24f41b3a821ebf6ed2fbb3c400745d 92f272874915c4877257c68e1d43539f7183cbef97f4b0837113afe72f1cdb3c e4d08ecd5adee34f6ccfaeb042d497cedc44597ee436ef3a30c0c98e725c3582
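Stopping the kube-system containers is the two-step sequence visible above: collect IDs with a label-filtered crictl ps, then hand the whole batch to one crictl stop. A compact sketch of the same flow (the label and flags are taken from the log; error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Quiet mode prints one container ID per line; the label narrows the
	// listing to kube-system pods, matching the command in the log.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// Stop every container in a single invocation, as the log shows.
	args := append([]string{"crictl", "stop"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		fmt.Println("crictl stop failed:", err)
	}
}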
	I0601 11:12:13.728047  254820 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:12:13.737824  254820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:12:13.744570  254820 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun  1 10:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5779 Jun  1 10:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5927 Jun  1 10:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5727 Jun  1 10:59 /etc/kubernetes/scheduler.conf
	
	I0601 11:12:13.744618  254820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:12:13.750998  254820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:12:13.757437  254820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:12:13.763762  254820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:12:13.770049  254820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:12:13.776537  254820 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:12:13.776557  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:12:13.828815  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:12:14.942688  254820 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.113830666s)
	I0601 11:12:14.942720  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:12:15.094808  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:12:15.154069  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
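Note that the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the saved /var/tmp/minikube/kubeadm.yaml instead of running a full init, each under the pinned v1.16.0 binaries PATH. A sketch of that loop, with the phase list copied from the log lines above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Mirror the log: pinned binaries first on PATH, then the phase
		// against the saved kubeadm config.
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" `+
				`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}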
	I0601 11:12:15.277219  254820 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:12:15.277276  254820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:12:15.786866  254820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:12:16.287245  254820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:12:16.362352  254820 api_server.go:71] duration metric: took 1.085132431s to wait for apiserver process to appear ...
	I0601 11:12:16.362382  254820 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:12:16.362394  254820 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0601 11:12:20.045299  254820 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0601 11:12:20.045324  254820 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0601 11:12:20.546002  254820 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0601 11:12:20.556401  254820 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:12:20.556432  254820 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	(healthz body identical to the 500 response above)
	I0601 11:12:21.045941  254820 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0601 11:12:21.055969  254820 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:12:21.056002  254820 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	(healthz body identical to the 500 response above)
	I0601 11:12:21.545490  254820 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0601 11:12:21.549962  254820 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0601 11:12:21.555780  254820 api_server.go:140] control plane version: v1.16.0
	I0601 11:12:21.555800  254820 api_server.go:130] duration metric: took 5.193412064s to wait for apiserver health ...
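The healthz progression above is the usual bootstrap sequence: 403 while anonymous access is still blocked because the RBAC bootstrap roles do not exist yet, then 500 while individual post-start hooks (the [-] entries) finish, then 200 with body "ok". The poll itself is an HTTPS GET with certificate verification relaxed, retried on a ~500 ms cadence; a minimal sketch, assuming the self-signed bootstrap cert and not minikube's exact client:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed cert during bootstrap, so this
	// illustrative poller skips verification.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// 403: RBAC roles not bootstrapped yet; 500: hooks still failing.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
}

Polling until the first 200 is what makes the "duration metric: took 5.19s to wait for apiserver health" line above meaningful: it measures the whole 403-500-200 arc.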
	I0601 11:12:21.555809  254820 cni.go:95] Creating CNI manager for ""
	I0601 11:12:21.555817  254820 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:12:21.558239  254820 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:12:21.559734  254820 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:12:21.563356  254820 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0601 11:12:21.563372  254820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:12:21.576246  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:12:21.768435  254820 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:12:21.773534  254820 system_pods.go:59] 8 kube-system pods found
	I0601 11:12:21.773572  254820 system_pods.go:61] "coredns-5644d7b6d9-5z28m" [b8f125e8-150b-4192-8d5b-60552dc856b9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0601 11:12:21.773581  254820 system_pods.go:61] "etcd-old-k8s-version-20220601105850-6708" [469f08a7-0e18-4712-b991-5df1fd26ab24] Running
	I0601 11:12:21.773594  254820 system_pods.go:61] "kindnet-rvdm8" [0648d955-2d20-449d-88b9-57fb087825d8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:12:21.773602  254820 system_pods.go:61] "kube-apiserver-old-k8s-version-20220601105850-6708" [f680e291-6ab4-45e9-bdfe-dabeb602e24d] Running
	I0601 11:12:21.773611  254820 system_pods.go:61] "kube-controller-manager-old-k8s-version-20220601105850-6708" [4dfd2702-2773-43da-8434-de5f8a4c90cf] Running
	I0601 11:12:21.773615  254820 system_pods.go:61] "kube-proxy-9db28" [8cae7678-59a9-4d84-b561-a852eacc0638] Running
	I0601 11:12:21.773619  254820 system_pods.go:61] "kube-scheduler-old-k8s-version-20220601105850-6708" [ce172f62-5bfa-49a1-8508-fe73d644b996] Running
	I0601 11:12:21.773624  254820 system_pods.go:61] "storage-provisioner" [dad57bc8-559a-4344-85f0-27b88be62a51] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0601 11:12:21.773632  254820 system_pods.go:74] duration metric: took 5.179129ms to wait for pod list to return data ...
	I0601 11:12:21.773641  254820 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:12:21.775852  254820 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:12:21.775924  254820 node_conditions.go:123] node cpu capacity is 8
	I0601 11:12:21.775936  254820 node_conditions.go:105] duration metric: took 2.288061ms to run NodePressure ...
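The pod census and node-capacity lines above correspond to a plain List of the kube-system namespace plus a read of each pod's status. Roughly, with client-go (the kubeconfig path is a stand-in):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Phase is Pending for the two unschedulable pods in the log above.
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}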
	I0601 11:12:21.775955  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:12:21.920239  254820 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:12:21.923094  254820 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0601 11:12:22.286735  254820 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0601 11:12:22.727917  254820 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0601 11:12:23.259004  254820 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0601 11:12:24.043236  254820 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0601 11:12:25.549138  254820 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0601 11:12:26.627452  254820 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0601 11:12:28.501887  254820 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0601 11:12:31.055802  254820 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0601 11:12:36.191718  254820 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0601 11:12:45.953807  254820 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0601 11:13:04.895650  254820 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0601 11:13:20.344982  254820 kubeadm.go:777] kubelet initialised
	I0601 11:13:20.345004  254820 kubeadm.go:778] duration metric: took 58.424744198s waiting for restarted kubelet to initialise ...
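The retry cadence above (360 ms, 436 ms, 527 ms, ... up to ~19 s) is a jittered exponential backoff, capped so the total wait stays inside the budget. A generic sketch of the pattern; the constants and growth factor are illustrative, not minikube's retry.go tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or the budget is exhausted,
// sleeping a jittered, exponentially growing interval between attempts.
func retryWithBackoff(fn func() error, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	delay := 360 * time.Millisecond // first delay seen in the log above
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		// Grow ~1.5x with +/-25% jitter, mimicking the cadence in the log
		// (assumption: the exact factor and jitter are not in the source).
		delay = time.Duration(float64(delay) * (1.5 + (rand.Float64()-0.5)/2))
	}
}

func main() {
	start := time.Now()
	_ = retryWithBackoff(func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, time.Minute)
}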
	I0601 11:13:20.345011  254820 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:13:20.349667  254820 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace to be "Ready" ...
	I0601 11:13:22.355543  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	(the same poll repeated roughly every 2.5 s from 11:13:24 through 11:16:02, each time reporting the identical Pending status with the node-taint Unschedulable condition)
	I0601 11:16:04.355179  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:06.355418  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:08.355954  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:10.855982  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:13.355421  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:15.355541  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:17.855083  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:19.855300  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:21.855513  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:24.354878  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:26.855526  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:29.355547  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:31.855558  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:33.855768  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:35.855971  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:38.355081  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:40.355331  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:42.855945  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:45.354907  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:47.355242  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:49.355631  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:51.854986  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:53.855517  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:56.355928  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:16:58.855967  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:01.355221  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:03.356805  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:05.855005  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:07.855118  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:09.855246  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:11.855357  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:14.355967  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:16.855640  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:19.355251  254820 pod_ready.go:102] pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 10:59:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:20.353024  254820 pod_ready.go:81] duration metric: took 4m0.003317239s waiting for pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace to be "Ready" ...
	E0601 11:17:20.353048  254820 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-5z28m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:17:20.353067  254820 pod_ready.go:38] duration metric: took 4m0.008046261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:17:20.353090  254820 kubeadm.go:630] restartCluster took 5m9.726790355s
	W0601 11:17:20.353201  254820 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
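
The four-minute wait above fails because the coredns pod never schedules: the single node is advertising a taint the pod does not tolerate, most plausibly node.kubernetes.io/not-ready while the CNI was still uninitialized (a reading of the status above, not something the log states). A hedged way to confirm against a live profile of this name, assuming the kubectl context matches the minikube profile as it normally does:

    # assumed commands, not taken from this run
    kubectl --context old-k8s-version-20220601105850-6708 describe node | grep Taints
    kubectl --context old-k8s-version-20220601105850-6708 -n kube-system \
      get pod coredns-5644d7b6d9-5z28m -o jsonpath='{.status.conditions}'
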
	I0601 11:17:20.353229  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:17:21.540348  254820 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.18709622s)
	I0601 11:17:21.540403  254820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:17:21.550073  254820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:17:21.557225  254820 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:17:21.557279  254820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:17:21.564483  254820 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
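
These ENOENT results are expected at this point: the "kubeadm reset" at 11:17:20 removed the control-plane kubeconfigs, so the stale-config check exits with status 2 and minikube proceeds to a fresh "kubeadm init". The check can be reproduced by hand with the same paths the log shows (the trailing echo is illustrative only):

    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
      || echo "no stale configs; kubeadm init will start clean"
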
	I0601 11:17:21.564542  254820 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:17:21.918912  254820 out.go:204]   - Generating certificates and keys ...
	I0601 11:17:22.738748  254820 out.go:204]   - Booting up control plane ...
	I0601 11:17:31.781643  254820 out.go:204]   - Configuring RBAC rules ...
	I0601 11:17:32.197904  254820 cni.go:95] Creating CNI manager for ""
	I0601 11:17:32.197928  254820 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:17:32.199768  254820 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:17:32.201271  254820 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:17:32.204901  254820 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0601 11:17:32.204927  254820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:17:32.218979  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:17:32.428375  254820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:17:32.428491  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:32.428492  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=old-k8s-version-20220601105850-6708 minikube.k8s.io/updated_at=2022_06_01T11_17_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:32.435199  254820 ops.go:34] apiserver oom_adj: -16
	I0601 11:17:32.502431  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same "kubectl get sa default" probe repeats at ~0.5s intervals, 11:17:33 through 11:17:46 (28 lines elided) ...]
	I0601 11:17:47.112043  254820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:17:47.179274  254820 kubeadm.go:1045] duration metric: took 14.750830236s to wait for elevateKubeSystemPrivileges.
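
The "get sa default" probe loop above appears to be how minikube waits for the default ServiceAccount to exist (i.e. the new control plane is actually serving) before it reports elevateKubeSystemPrivileges complete. A minimal sketch of an equivalent wait, assuming the same binary and kubeconfig paths shown in the log:

    # poll until the default ServiceAccount is visible (sketch, ~0.5s interval)
    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
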
	I0601 11:17:47.179303  254820 kubeadm.go:397] StartCluster complete in 5m36.59748449s
	I0601 11:17:47.179319  254820 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:17:47.179406  254820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:17:47.180983  254820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:17:47.695922  254820 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220601105850-6708" rescaled to 1
	I0601 11:17:47.695995  254820 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:17:47.699321  254820 out.go:177] * Verifying Kubernetes components...
	I0601 11:17:47.696036  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:17:47.696052  254820 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:17:47.696246  254820 config.go:178] Loaded profile config "old-k8s-version-20220601105850-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0601 11:17:47.700668  254820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:17:47.700702  254820 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700714  254820 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700720  254820 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700726  254820 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700729  254820 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700730  254820 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220601105850-6708"
	W0601 11:17:47.700732  254820 addons.go:165] addon storage-provisioner should already be in state true
	W0601 11:17:47.700739  254820 addons.go:165] addon dashboard should already be in state true
	W0601 11:17:47.700738  254820 addons.go:165] addon metrics-server should already be in state true
	I0601 11:17:47.700777  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.700784  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.700790  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.700709  254820 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.700824  254820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220601105850-6708"
	I0601 11:17:47.701173  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.701259  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.701279  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.701285  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.713330  254820 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:17:47.750554  254820 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220601105850-6708"
	W0601 11:17:47.750578  254820 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:17:47.750614  254820 host.go:66] Checking if "old-k8s-version-20220601105850-6708" exists ...
	I0601 11:17:47.754763  254820 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:17:47.750975  254820 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601105850-6708 --format={{.State.Status}}
	I0601 11:17:47.758252  254820 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:17:47.757064  254820 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:17:47.758285  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:17:47.758359  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.759985  254820 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:17:47.761242  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:17:47.761281  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:17:47.761332  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.764485  254820 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:17:47.765882  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:17:47.765902  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:17:47.765947  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.809745  254820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:17:47.813277  254820 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:17:47.813299  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:17:47.813350  254820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601105850-6708
	I0601 11:17:47.814249  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.830337  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.832848  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.854014  254820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601105850-6708/id_rsa Username:docker}
	I0601 11:17:47.968702  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:17:47.969398  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:17:47.969416  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:17:47.969640  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:17:47.970189  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:17:47.970208  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:17:48.055776  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:17:48.055807  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:17:48.057898  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:17:48.057918  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:17:48.072675  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:17:48.072702  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:17:48.072788  254820 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:17:48.072805  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:17:48.154620  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:17:48.154657  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:17:48.158011  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:17:48.181023  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:17:48.181054  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:17:48.271474  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:17:48.271501  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:17:48.275017  254820 start.go:806] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
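
The host-record injection reported here comes from the sed pipeline at 11:17:47.809745, which splices a hosts stanza into the CoreDNS Corefile just ahead of the forward block. Reconstructed from that sed expression, the injected stanza should read:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
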
	I0601 11:17:48.360863  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:17:48.360891  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:17:48.376549  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:17:48.376580  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:17:48.392167  254820 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:17:48.392196  254820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:17:48.464169  254820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:17:48.882430  254820 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20220601105850-6708"
	I0601 11:17:49.264813  254820 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:17:49.266465  254820 addons.go:417] enableAddons completed in 1.57041232s
	I0601 11:17:49.718664  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	[... the same node_ready.go:58 "Ready":"False" check repeats at ~2.5s intervals, 11:17:51 through 11:21:45 (103 lines elided) ...]
	I0601 11:21:47.719480  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.721505  254820 node_ready.go:38] duration metric: took 4m0.008123732s waiting for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:21:47.723918  254820 out.go:177] 
	W0601 11:21:47.725406  254820 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:21:47.725423  254820 out.go:239] * 
	W0601 11:21:47.726098  254820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:21:47.728001  254820 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-20220601105850-6708 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
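The GUEST_START failure above means minikube's node-readiness wait expired while node_ready.go was still seeing "Ready":"False". As a quick manual cross-check, outside the test harness, the node's Ready condition can be queried through the kubeconfig context that minikube names after the profile (context name below is taken from the profile in the args above):

	kubectl --context old-k8s-version-20220601105850-6708 get nodes
	kubectl --context old-k8s-version-20220601105850-6708 get node old-k8s-version-20220601105850-6708 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

The jsonpath filter prints the kubelet's own explanation for the failing Ready condition, which often points at the CNI or the container runtime.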
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601105850-6708
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601105850-6708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0",
	        "Created": "2022-06-01T10:59:00.78565124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255104,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:11:54.443188139Z",
	            "FinishedAt": "2022-06-01T11:11:52.690867678Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hosts",
	        "LogPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0-json.log",
	        "Name": "/old-k8s-version-20220601105850-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601105850-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601105850-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601105850-6708",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601105850-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601105850-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0960bb8f97b755414eac0338bfc1078877300285cb015d048bc6cd05ee3ed170",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49422"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49421"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49418"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49419"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0960bb8f97b7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601105850-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3b070aceb311",
	                        "old-k8s-version-20220601105850-6708"
	                    ],
	                    "NetworkID": "99443bab5d3fa350d07dfff0b6c1624f2cd2601ac21b76ee77d57de53df02f62",
	                    "EndpointID": "74753f08c4bc626a78cf7d97ad5a40c516e6b8e6d55bde671c073b80db81c952",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
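The inspect dump above is the full JSON; when only a few fields matter for triage, docker's -f/--format Go template narrows it to the container state or network address. These are plain docker CLI calls (not part of the harness), using the container name from the dump:

	docker inspect -f '{{.State.Status}} (started {{.State.StartedAt}})' old-k8s-version-20220601105850-6708
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-20220601105850-6708

The same template mechanism is what the harness itself uses below (cli_runner: docker container inspect ... --format={{.State.Status}}).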
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220601105850-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | calico-20220601104839-6708                                 | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220601104839-6708                              | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:19:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:19:52.827023  276679 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:19:52.827225  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827237  276679 out.go:309] Setting ErrFile to fd 2...
	I0601 11:19:52.827242  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827359  276679 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:19:52.827588  276679 out.go:303] Setting JSON to false
	I0601 11:19:52.828890  276679 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3747,"bootTime":1654078646,"procs":456,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:19:52.828955  276679 start.go:125] virtualization: kvm guest
	I0601 11:19:52.831944  276679 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:19:52.833439  276679 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:52.833372  276679 notify.go:193] Checking for updates...
	I0601 11:19:52.835007  276679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:52.836578  276679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:19:52.837966  276679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:19:52.839440  276679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:19:52.841215  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:52.841578  276679 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:52.880823  276679 docker.go:137] docker version: linux-20.10.16
	I0601 11:19:52.880897  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:52.978177  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:52.908721136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:52.978275  276679 docker.go:254] overlay module found
	I0601 11:19:52.981078  276679 out.go:177] * Using the docker driver based on existing profile
	I0601 11:19:52.982316  276679 start.go:284] selected driver: docker
	I0601 11:19:52.982326  276679 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:52.982412  276679 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:52.983242  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:53.085320  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:53.012439643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:53.085561  276679 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:19:53.085581  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:19:53.085589  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:19:53.085608  276679 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:53.088575  276679 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.089964  276679 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:19:53.091501  276679 out.go:177] * Pulling base image ...
	I0601 11:19:53.092800  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:19:53.092839  276679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:19:53.092856  276679 cache.go:57] Caching tarball of preloaded images
	I0601 11:19:53.092897  276679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:19:53.093061  276679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:19:53.093076  276679 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:19:53.093182  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.136384  276679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:19:53.136410  276679 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:19:53.136424  276679 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:19:53.136454  276679 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:19:53.136550  276679 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 69.025µs
	I0601 11:19:53.136570  276679 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:19:53.136577  276679 fix.go:55] fixHost starting: 
	I0601 11:19:53.137208  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.168642  276679 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601110654-6708: state=Stopped err=<nil>
	W0601 11:19:53.168681  276679 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:19:53.170972  276679 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601110654-6708" ...
	I0601 11:19:50.719789  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.220276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.243194  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:55.243470  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:53.172500  276679 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.580842  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.615796  276679 kic.go:416] container "default-k8s-different-port-20220601110654-6708" state is running.
	I0601 11:19:53.616193  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.647308  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.647503  276679 machine.go:88] provisioning docker machine ...
	I0601 11:19:53.647526  276679 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:19:53.647560  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.679842  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:53.680106  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:53.680131  276679 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:19:53.680742  276679 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55946->127.0.0.1:49442: read: connection reset by peer
	I0601 11:19:56.807880  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:19:56.807951  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.839321  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:56.839475  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:56.839510  276679 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
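The shell snippet above is minikube's idempotent /etc/hosts fixup: rewrite an existing 127.0.1.1 line to the new hostname, or append one if absent. The result could be spot-checked from the host; a sketch via docker exec (this exact check is not something the harness runs):

	docker exec default-k8s-different-port-20220601110654-6708 grep 127.0.1.1 /etc/hosts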
	I0601 11:19:56.951445  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:19:56.951473  276679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:19:56.951491  276679 ubuntu.go:177] setting up certificates
	I0601 11:19:56.951499  276679 provision.go:83] configureAuth start
	I0601 11:19:56.951539  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.982392  276679 provision.go:138] copyHostCerts
	I0601 11:19:56.982451  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:19:56.982464  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:19:56.982537  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:19:56.982652  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:19:56.982664  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:19:56.982697  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:19:56.982789  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:19:56.982802  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:19:56.982829  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:19:56.982876  276679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
	I0601 11:19:57.067574  276679 provision.go:172] copyRemoteCerts
	I0601 11:19:57.067626  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:19:57.067654  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.098669  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.182904  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:19:57.199734  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:19:57.215838  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:19:57.232284  276679 provision.go:86] duration metric: configureAuth took 280.774927ms
	I0601 11:19:57.232312  276679 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:19:57.232468  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:57.232480  276679 machine.go:91] provisioned docker machine in 3.584963826s
	I0601 11:19:57.232486  276679 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:19:57.232492  276679 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:19:57.232530  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:19:57.232572  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.265048  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.351029  276679 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:19:57.353646  276679 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:19:57.353677  276679 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:19:57.353687  276679 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:19:57.353695  276679 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:19:57.353706  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:19:57.353765  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:19:57.353858  276679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:19:57.353951  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:19:57.360153  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:19:57.376881  276679 start.go:309] post-start completed in 144.384989ms
	I0601 11:19:57.376932  276679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:19:57.376962  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.411118  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.496188  276679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:19:57.499982  276679 fix.go:57] fixHost completed within 4.363400058s
	I0601 11:19:57.500005  276679 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 4.363442227s
	I0601 11:19:57.500082  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532057  276679 ssh_runner.go:195] Run: systemctl --version
	I0601 11:19:57.532107  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532107  276679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:19:57.532168  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.567039  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.567550  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.677865  276679 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:19:57.688848  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:19:57.697588  276679 docker.go:187] disabling docker service ...
	I0601 11:19:57.697632  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:19:57.706476  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:19:57.714826  276679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:19:57.791919  276679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:19:55.719582  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:58.219607  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:57.743387  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:00.243011  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:57.865357  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:19:57.874183  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:19:57.886120  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.893706  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.901159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.908873  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.916512  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:19:57.923712  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0601 11:19:57.935738  276679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:19:57.941802  276679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:19:57.947777  276679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:19:58.021579  276679 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:19:58.089337  276679 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:19:58.089424  276679 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:19:58.092751  276679 start.go:468] Will wait 60s for crictl version
	I0601 11:19:58.092798  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:19:58.116611  276679 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:19:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:20:00.719494  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:03.219487  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:02.243060  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:04.243463  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:06.244423  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:05.719159  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:07.719735  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:09.163975  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:20:09.186613  276679 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:20:09.186676  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.214385  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.243587  276679 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:20:09.245245  276679 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:09.276501  276679 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:20:09.279800  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.290992  276679 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:20:08.742836  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:11.242670  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:09.292426  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:20:09.292493  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.315170  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.315189  276679 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:20:09.315224  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.338119  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.338137  276679 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:20:09.338184  276679 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:20:09.360773  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:09.360799  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:09.360817  276679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:20:09.360831  276679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:20:09.360999  276679 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
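	The block above is the full generated kubeadm config: an InitConfiguration (bind port 8444, containerd socket, no taints), a ClusterConfiguration (admission plugins, cert SANs, etcd data dir) and the kubelet/kube-proxy configs. It can be exercised without mutating node state via kubeadm's dry-run mode; a sketch, using the path the log writes it to below:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run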
	
	I0601 11:20:09.361105  276679 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
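The ExecStart override above is what the scp calls below install (the 10-kubeadm.conf drop-in, kubelet.service, and kubeadm.yaml.new). If installing such a drop-in by hand, systemd needs a reload before the new ExecStart is visible; a sketch:

	sudo systemctl daemon-reload
	systemctl cat kubelet.service   # should show the drop-in's ExecStart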
	I0601 11:20:09.361162  276679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:20:09.368101  276679 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:20:09.368169  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:20:09.374382  276679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:20:09.386282  276679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:20:09.398188  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:20:09.409736  276679 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:20:09.412458  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.420789  276679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:20:09.420897  276679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:20:09.420940  276679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:20:09.421000  276679 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:20:09.421053  276679 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:20:09.421088  276679 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:20:09.421176  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:20:09.421205  276679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:20:09.421216  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:20:09.421244  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:20:09.421270  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:20:09.421298  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:20:09.421334  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:20:09.421917  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:20:09.438490  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:20:09.454711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:20:09.471469  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:20:09.488271  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:20:09.504375  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:20:09.520473  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:20:09.536663  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:20:09.552725  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:20:09.568724  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:20:09.584711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:20:09.600406  276679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:20:09.611814  276679 ssh_runner.go:195] Run: openssl version
	I0601 11:20:09.616280  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:20:09.623058  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625881  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625913  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.630367  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:20:09.636712  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:20:09.643407  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646316  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646366  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.650791  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:20:09.657126  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:20:09.663990  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666934  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666966  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.671359  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:20:09.677573  276679 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:09.677668  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:20:09.677695  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:09.700805  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:09.700825  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:09.700835  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:09.700844  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:09.700853  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:09.700863  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:09.700870  276679 cri.go:87] found id: ""
	I0601 11:20:09.700900  276679 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:20:09.711953  276679 cri.go:114] JSON = null
	W0601 11:20:09.711995  276679 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
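The warning above is a disagreement between two views of the runtime: crictl sees six kube-system containers, while runc's state directory for the paused check returns null, so the unpause step is skipped. Both sides can be reproduced by hand with the exact paths from the log:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # prints the 6 container ids
	sudo runc --root /run/containerd/runc/k8s.io list -f json                   # prints null here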
	I0601 11:20:09.712052  276679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:20:09.718628  276679 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:20:09.718649  276679 kubeadm.go:626] restartCluster start
	I0601 11:20:09.718687  276679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:20:09.724992  276679 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.725567  276679 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601110654-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:20:09.725941  276679 kubeconfig.go:127] "default-k8s-different-port-20220601110654-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:20:09.726552  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:20:09.727803  276679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:20:09.734151  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.734186  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.741699  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
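From here the log is this single poll iteration repeated roughly every 200ms until kubeadm brings the apiserver back (at 11:20:13 below). Collapsed into a shell equivalent (the cadence is read off the timestamps, not a documented minikube constant):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.2; done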
	I0601 11:20:09.942065  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.942125  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.950479  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.142775  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.142860  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.151184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.342428  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.342511  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.350942  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.542230  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.542324  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.550731  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.741765  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.741840  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.750184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.942518  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.942589  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.951137  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.142442  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.142519  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.151332  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.342632  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.342693  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.351149  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.542423  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.542483  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.550625  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.741869  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.741945  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.750554  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.942776  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.942855  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.951226  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.142534  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.142617  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.151065  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.342354  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.342429  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.350855  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.542142  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.542207  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.550615  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.741824  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.741894  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.750511  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.750537  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.750569  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.758099  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.758124  276679 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 11:20:12.758131  276679 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:20:12.758146  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:20:12.758196  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:12.782896  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:12.782918  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:12.782924  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:12.782931  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:12.782936  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:12.782943  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:12.782948  276679 cri.go:87] found id: ""
	I0601 11:20:12.782955  276679 cri.go:232] Stopping containers: [fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44]
	I0601 11:20:12.782994  276679 ssh_runner.go:195] Run: which crictl
	I0601 11:20:12.785799  276679 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44
	I0601 11:20:12.809504  276679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:20:12.819061  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:20:12.825913  276679 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:07 /etc/kubernetes/scheduler.conf
	
	I0601 11:20:12.825968  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 11:20:10.219173  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:12.219371  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:13.243691  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:15.243798  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:12.832916  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 11:20:12.839178  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.845567  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.845605  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.851603  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 11:20:12.857919  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.857967  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 11:20:12.864112  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870523  276679 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870540  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:12.912381  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.433508  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.566844  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.617762  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.686212  276679 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:20:13.686269  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.195273  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.695296  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.195457  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.695544  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.195542  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.695465  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.195333  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.694666  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.719337  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.218953  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.742741  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:20.244002  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:18.194692  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:18.694918  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.195623  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.695137  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.758656  276679 api_server.go:71] duration metric: took 6.072444993s to wait for apiserver process to appear ...
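
Annotation: the half-second cadence of the `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above is a plain poll-until-found loop; api_server.go then reports the total wait (6.07s here). A sketch of the same loop, assuming pgrep runs locally rather than over the SSH runner the test uses:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the pattern matches or the timeout
    // expires, mirroring the ~500ms retry interval visible in the log.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as at least one process matches.
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %q", timeout, pattern)
    }

    func main() {
        start := time.Now()
        if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver process appeared after %s\n", time.Since(start))
    }
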
	I0601 11:20:19.758687  276679 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:20:19.758700  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.369047  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:20:22.369078  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:20:19.718920  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:21.719314  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:23.719804  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:22.869917  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.874561  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:22.874589  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.370203  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.375048  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:23.375073  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.869242  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.874012  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0601 11:20:23.879941  276679 api_server.go:140] control plane version: v1.23.6
	I0601 11:20:23.879963  276679 api_server.go:130] duration metric: took 4.121269797s to wait for apiserver health ...
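
Annotation: the /healthz probes above walk through the expected bootstrap progression: first 403 (anonymous access is rejected until the rbac/bootstrap-roles post-start hook completes), then 500 with [-] entries for the hooks still failing, then 200 once every hook reports ok. Non-200 responses are logged as warnings and retried at roughly 500ms intervals. A hedged Go sketch of such a prober, assuming certificate verification is skipped the way an anonymous bootstrap-time probe must:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver's serving cert is not trusted by the probing host,
        // so verification is skipped for this anonymous check.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.49.2:8444/healthz" // endpoint from the log
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz ok")
                    return
                }
                // 403 (RBAC not yet bootstrapped) and 500 (hooks failing)
                // are both treated as "not ready yet", as in the log.
                log.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for healthz")
    }
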
	I0601 11:20:23.879972  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:23.879977  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:23.882052  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:20:22.743507  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:25.242700  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:23.883460  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:20:23.886921  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:20:23.886945  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:20:23.899955  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
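
Annotation: with the docker driver and the containerd runtime, cni.go selects kindnet, writes the rendered manifest to /var/tmp/minikube/cni.yaml, and applies it with the version-pinned kubectl binary. A sketch of that apply step, with paths copied from the log (the manifest contents themselves are not captured in this report):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Apply the CNI manifest with the pinned kubectl and the in-VM
        // kubeconfig, matching the ssh_runner invocation above.
        cmd := exec.Command(
            "/var/lib/minikube/binaries/v1.23.6/kubectl",
            "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml",
        )
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
        }
    }
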
	I0601 11:20:24.544438  276679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:20:24.550979  276679 system_pods.go:59] 9 kube-system pods found
	I0601 11:20:24.551015  276679 system_pods.go:61] "coredns-64897985d-9gcj2" [28e98fca-a88b-422d-9f4b-797b18a8ff7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551025  276679 system_pods.go:61] "etcd-default-k8s-different-port-20220601110654-6708" [3005e651-1349-4d5e-b06f-e0fac3064ccf] Running
	I0601 11:20:24.551035  276679 system_pods.go:61] "kindnet-7fspq" [eefcd8e6-51e4-4d48-a420-93f4b47cf732] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:20:24.551042  276679 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601110654-6708" [974fafdd-9176-4d97-acd7-9874d63b4987] Running
	I0601 11:20:24.551053  276679 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601110654-6708" [38b2c1a1-9a1a-4a1f-9fac-904e47d545be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:20:24.551066  276679 system_pods.go:61] "kube-proxy-slzcl" [a1a6237f-6142-4e31-8bd4-72afd4f8a4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:20:24.551083  276679 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601110654-6708" [42ce6176-36e5-46bc-a443-19e4ca958785] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:20:24.551092  276679 system_pods.go:61] "metrics-server-b955d9d8-2k9wk" [fbc457b5-c359-4b84-abe5-d488874181f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551102  276679 system_pods.go:61] "storage-provisioner" [48086474-3417-47ff-970d-f7cf7806983b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551112  276679 system_pods.go:74] duration metric: took 6.652373ms to wait for pod list to return data ...
	I0601 11:20:24.551126  276679 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:20:24.553819  276679 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:20:24.553843  276679 node_conditions.go:123] node cpu capacity is 8
	I0601 11:20:24.553854  276679 node_conditions.go:105] duration metric: took 2.721044ms to run NodePressure ...
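
Annotation: every Pending pod in the listing above is blocked by the same predicate: the node still carries the node.kubernetes.io/not-ready taint (cleared only once the CNI is up and kubelet reports Ready), and none of the unscheduled pods tolerate it. A hypothetical way to confirm that from outside the test, assuming kubectl access to the cluster (this check is not part of minikube's flow):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Print each node's taints; a not-ready node lists
        // node.kubernetes.io/not-ready here.
        out, err := exec.Command(
            "kubectl", "get", "nodes",
            "-o", "jsonpath={range .items[*]}{.metadata.name}{\"\\t\"}{.spec.taints}{\"\\n\"}{end}",
        ).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl get nodes: %v\n%s", err, out)
        }
        fmt.Print(string(out))
    }
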
	I0601 11:20:24.553869  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:24.680194  276679 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683686  276679 kubeadm.go:777] kubelet initialised
	I0601 11:20:24.683708  276679 kubeadm.go:778] duration metric: took 3.487172ms waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683715  276679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:24.689167  276679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	I0601 11:20:26.694484  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:26.219205  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:28.219317  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:27.243486  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:29.742717  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:31.742800  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:28.695017  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.695110  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:32.695566  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.219646  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:32.719074  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:34.242643  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:36.243891  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.195305  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:37.197596  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.219473  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:37.719336  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:38.243963  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.743349  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:39.695270  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:42.195160  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.218932  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.719276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.743398  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.243686  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:44.694661  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:46.695274  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.219350  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.719698  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.742813  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.244047  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:48.696514  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:51.195247  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.218967  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.219422  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.743394  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.743515  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:53.694370  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:55.694640  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:57.695171  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.719514  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.219033  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.242819  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.243739  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.739945  270029 pod_ready.go:81] duration metric: took 4m0.002166585s waiting for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
	E0601 11:20:59.739968  270029 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:20:59.739995  270029 pod_ready.go:38] duration metric: took 4m0.008917217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:59.740018  270029 kubeadm.go:630] restartCluster took 4m15.707393707s
	W0601 11:20:59.740131  270029 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 11:20:59.740156  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:21:01.430762  270029 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.690579833s)
	I0601 11:21:01.430838  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:01.440364  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:21:01.447145  270029 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:21:01.447194  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:21:01.453852  270029 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:21:01.453891  270029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
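
Annotation: once the 4m0s WaitExtra budget expires, restartCluster gives up, runs `kubeadm reset --force` (which removes the /etc/kubernetes/*.conf files the subsequent ls can no longer find), and falls back to a clean `kubeadm init` with the long --ignore-preflight-errors list shown above. A condensed Go sketch of that fallback, assuming the config path and CRI socket from the log; the full preflight-ignore list is in the log line and elided here for brevity:

    package main

    import (
        "log"
        "os/exec"
    )

    func run(name string, args ...string) {
        if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
            log.Fatalf("%s %v: %v\n%s", name, args, err, out)
        }
    }

    func main() {
        // Tear down whatever the failed restart left behind...
        run("kubeadm", "reset",
            "--cri-socket", "/run/containerd/containerd.sock", "--force")
        // ...then re-init from scratch with the same rendered config.
        run("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
    }
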
	I0601 11:21:01.701224  270029 out.go:204]   - Generating certificates and keys ...
	I0601 11:21:00.194872  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:02.195437  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.219067  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:01.219719  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:03.719181  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:02.294583  270029 out.go:204]   - Booting up control plane ...
	I0601 11:21:04.694423  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:06.695087  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:05.719516  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:07.719966  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:09.195174  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:11.694583  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:10.218984  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:12.219075  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:14.337355  270029 out.go:204]   - Configuring RBAC rules ...
	I0601 11:21:14.750718  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:21:14.750741  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:21:14.752905  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:21:14.754285  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:21:14.758047  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:21:14.758065  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:21:14.771201  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:21:15.434277  270029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:21:15.434380  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.434381  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601110327-6708 minikube.k8s.io/updated_at=2022_06_01T11_21_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489119  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489208  270029 ops.go:34] apiserver oom_adj: -16
	I0601 11:21:16.079192  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:16.579319  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:14.194681  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:16.694557  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:14.219440  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:16.719363  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:17.079349  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:17.579548  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.079683  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.579186  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.079819  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.579346  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.079183  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.579984  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.079335  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.579766  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.694796  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:21.194627  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:19.218867  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:21.219185  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:23.719814  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:22.079321  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:22.579993  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.079856  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.579743  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.079256  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.579276  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.079828  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.579763  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.080068  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.579388  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.694527  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:25.694996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:27.079269  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.579729  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.636171  270029 kubeadm.go:1045] duration metric: took 12.201851278s to wait for elevateKubeSystemPrivileges.
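
Annotation: the burst of `kubectl get sa default` calls above is elevateKubeSystemPrivileges polling, at the same 500ms cadence, until the default ServiceAccount exists after the minikube-rbac clusterrolebinding was created at 11:21:15; the wait took 12.2s here. A sketch of that wait loop, reusing the pinned kubectl path from the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.23.6/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        start := time.Now()
        for time.Now().Before(deadline) {
            // Exit status 0 means the "default" ServiceAccount exists.
            err := exec.Command(kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Printf("default SA appeared after %s\n", time.Since(start))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default service account")
    }
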
	I0601 11:21:27.636205  270029 kubeadm.go:397] StartCluster complete in 4m43.646757592s
	I0601 11:21:27.636227  270029 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:27.636334  270029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:21:27.637880  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:28.157076  270029 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601110327-6708" rescaled to 1
	I0601 11:21:28.157150  270029 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:21:28.157180  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:21:28.159818  270029 out.go:177] * Verifying Kubernetes components...
	I0601 11:21:28.157185  270029 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:21:28.157406  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:21:28.161484  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:28.161496  270029 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161511  270029 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161523  270029 addons.go:165] addon metrics-server should already be in state true
	I0601 11:21:28.161537  270029 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161566  270029 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601110327-6708"
	I0601 11:21:28.161573  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	W0601 11:21:28.161579  270029 addons.go:165] addon dashboard should already be in state true
	I0601 11:21:28.161483  270029 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161622  270029 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161631  270029 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:21:28.161636  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161669  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161500  270029 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161709  270029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601110327-6708"
	I0601 11:21:28.161949  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162094  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162123  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162229  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.209663  270029 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.211523  270029 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:21:28.213009  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:21:28.213030  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:21:28.213079  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.216922  270029 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.218989  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:21:28.217201  270029 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.219035  270029 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:21:28.219075  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.219579  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.219012  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:21:28.219781  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.236451  270029 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:21:26.218905  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.219209  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.238138  270029 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.238163  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:21:28.238217  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.246850  270029 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:21:28.246885  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:21:28.273680  270029 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.273707  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:21:28.273761  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.278846  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.279320  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.286384  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.321729  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.455756  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:21:28.455785  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:21:28.466348  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.469026  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:21:28.469067  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:21:28.469486  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.478076  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:21:28.478099  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:21:28.487008  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:21:28.487036  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:21:28.573106  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:21:28.573135  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:21:28.574698  270029 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
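	
	(For context: the sed pipeline run at 11:21:28.246885 above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway, 192.168.76.1. A sketch of how to confirm the injected block on a working cluster follows; the jsonpath form is illustrative, not taken from this run.)
	
	    # print the live Corefile and look for the injected hosts block
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # expected fragment, per the sed expression in the log above:
	    #     hosts {
	    #        192.168.76.1 host.minikube.internal
	    #        fallthrough
	    #     }
	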
	I0601 11:21:28.577019  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.577042  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:21:28.653936  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:21:28.653967  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:21:28.658482  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.671762  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:21:28.671808  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:21:28.758424  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:21:28.758516  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:21:28.776703  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:21:28.776735  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:21:28.794636  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:21:28.794670  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:21:28.959418  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:28.959449  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:21:28.976465  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:29.354605  270029 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601110327-6708"
	I0601 11:21:29.699561  270029 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:21:29.700807  270029 addons.go:417] enableAddons completed in 1.543631535s
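	
	(The addon flow above stages each manifest over SSH ("scp memory --> /etc/kubernetes/addons/...") and then applies each addon's manifests in a single kubectl invocation, as at 11:21:28.658482. A minimal verification sketch follows; the metrics-server label selector is assumed from the stock minikube manifests.)
	
	    # check that the enabled addons actually produced workloads
	    kubectl -n kube-system get pods -l k8s-app=metrics-server
	    kubectl -n kube-system get pod storage-provisioner
	    kubectl -n kubernetes-dashboard get pods
	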
	I0601 11:21:30.260215  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:28.196140  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.694688  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:32.695236  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.219534  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.219685  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.260412  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:34.760173  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:36.760442  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:35.195034  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:37.195304  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:34.718805  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:36.719108  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:38.760533  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:40.761060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:39.694703  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:42.195994  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:39.219402  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:41.718982  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.719227  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.259684  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.260363  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.719329  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.719480  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.721505  254820 node_ready.go:38] duration metric: took 4m0.008123732s waiting for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:21:47.723918  254820 out.go:177] 
	W0601 11:21:47.725406  254820 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:21:47.725423  254820 out.go:239] * 
	W0601 11:21:47.726098  254820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:21:47.728001  254820 out.go:177] 
	I0601 11:21:44.695306  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.194624  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	474a26b35c18b       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   4390368520877
	310e21ce9d141       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   4390368520877
	bba155e14e8f3       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   2757059ae300e
	0347453bb77d9       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   f76ee23e41e32
	c6dd696a23428       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   f2e3ad18f3af9
	a946b8ec63ccd       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   b9bd728b9dde4
	c7d9c76499959       b2756210eeabf       4 minutes ago        Running             etcd                      0                   acf2412deefa0
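	
	(Note that kindnet-cni attempt 0 ran from 11:17:47 until the shim disconnect at 11:20:27 (see the containerd log below), and attempt 1 has only been running since 11:20:28, so the exited container's logs are the first place to look for why the CNI never initialized. A hedged diagnostic from inside the node; crictl resolving the truncated ID as a prefix is an assumption:)
	
	    sudo crictl ps -a | grep kindnet
	    sudo crictl logs 310e21ce9d141
	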
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:11:54 UTC, end at Wed 2022-06-01 11:21:48 UTC. --
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.275266257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.275279972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.275705004Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2757059ae300e4bbc94459d0120117a968b1adb8a2dadb74f55b3bdad076ce86 pid=3851 runtime=io.containerd.runc.v2
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.277436919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.277524637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.277539118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.277840680Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4 pid=3859 runtime=io.containerd.runc.v2
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.330312912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gh8fk,Uid:091898de-1c6a-40ef-a148-3acd0091efc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2757059ae300e4bbc94459d0120117a968b1adb8a2dadb74f55b3bdad076ce86\""
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.332824802Z" level=info msg="CreateContainer within sandbox \"2757059ae300e4bbc94459d0120117a968b1adb8a2dadb74f55b3bdad076ce86\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.345941452Z" level=info msg="CreateContainer within sandbox \"2757059ae300e4bbc94459d0120117a968b1adb8a2dadb74f55b3bdad076ce86\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bba155e14e8f3aac6b7847d5dd32a5f7b82602b1afa57eb4054e328a8e89213d\""
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.346479989Z" level=info msg="StartContainer for \"bba155e14e8f3aac6b7847d5dd32a5f7b82602b1afa57eb4054e328a8e89213d\""
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.371922304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-wnn66,Uid:655a68dd-59d6-46fa-9b98-018e0adc10d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\""
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.374622738Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.386274601Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"310e21ce9d14163f7fa71a73d3372ad19670ad2c2044e502fc7e639d02e04aa5\""
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.386708166Z" level=info msg="StartContainer for \"310e21ce9d14163f7fa71a73d3372ad19670ad2c2044e502fc7e639d02e04aa5\""
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.422609660Z" level=info msg="StartContainer for \"bba155e14e8f3aac6b7847d5dd32a5f7b82602b1afa57eb4054e328a8e89213d\" returns successfully"
	Jun 01 11:17:47 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:17:47.657509059Z" level=info msg="StartContainer for \"310e21ce9d14163f7fa71a73d3372ad19670ad2c2044e502fc7e639d02e04aa5\" returns successfully"
	Jun 01 11:20:27 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:20:27.896214862Z" level=info msg="shim disconnected" id=310e21ce9d14163f7fa71a73d3372ad19670ad2c2044e502fc7e639d02e04aa5
	Jun 01 11:20:27 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:20:27.896268294Z" level=warning msg="cleaning up after shim disconnected" id=310e21ce9d14163f7fa71a73d3372ad19670ad2c2044e502fc7e639d02e04aa5 namespace=k8s.io
	Jun 01 11:20:27 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:20:27.896281227Z" level=info msg="cleaning up dead shim"
	Jun 01 11:20:27 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:20:27.905072167Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:20:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4686 runtime=io.containerd.runc.v2\n"
	Jun 01 11:20:28 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:20:28.386931212Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jun 01 11:20:28 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:20:28.399585105Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"474a26b35c18b4257bbdf87dafc02876c3cbe21ebd72bf6427072e27c0acb83b\""
	Jun 01 11:20:28 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:20:28.400091898Z" level=info msg="StartContainer for \"474a26b35c18b4257bbdf87dafc02876c3cbe21ebd72bf6427072e27c0acb83b\""
	Jun 01 11:20:28 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:20:28.557935713Z" level=info msg="StartContainer for \"474a26b35c18b4257bbdf87dafc02876c3cbe21ebd72bf6427072e27c0acb83b\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220601105850-6708
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220601105850-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=old-k8s-version-20220601105850-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_17_32_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:17:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:21:27 +0000   Wed, 01 Jun 2022 11:17:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:21:27 +0000   Wed, 01 Jun 2022 11:17:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:21:27 +0000   Wed, 01 Jun 2022 11:17:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:21:27 +0000   Wed, 01 Jun 2022 11:17:24 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    old-k8s-version-20220601105850-6708
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	 System UUID:                cf752223-716a-46c7-b06a-74cba9af00dc
	 Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	 Kernel Version:             5.13.0-1027-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.4
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220601105850-6708                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                kindnet-wnn66                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                kube-apiserver-old-k8s-version-20220601105850-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                kube-controller-manager-old-k8s-version-20220601105850-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                kube-proxy-gh8fk                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                kube-scheduler-old-k8s-version-20220601105850-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x7 over 4m25s)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m1s                   kube-proxy, old-k8s-version-20220601105850-6708  Starting kube-proxy.
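	
	(The Ready=False condition, the node.kubernetes.io/not-ready:NoSchedule taint, and the kubelet's "cni plugin not initialized" message above are the same failure: until a CNI config appears, kubelet keeps the runtime network NotReady, the taint stays, and pods such as coredns remain Pending. Typical checks from the node, as a sketch; /etc/cni/net.d is the conventional directory, and kubelet may be pointed elsewhere via --cni-conf-dir:)
	
	    # kubelet flips NetworkReady once a CNI config file exists here (by default)
	    sudo ls -l /etc/cni/net.d
	    # confirm the taint that is blocking unscheduled pods
	    kubectl describe node old-k8s-version-20220601105850-6708 | grep -i taint
	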
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [c7d9c7649995996591a343170bd6f7b866e1d7a5c3c4c910856af8592831e768] <==
	* 2022-06-01 11:17:23.761687 I | etcdserver: initial cluster = old-k8s-version-20220601105850-6708=https://192.168.58.2:2380
	2022-06-01 11:17:23.765662 I | etcdserver: starting member b2c6679ac05f2cf1 in cluster 3a56e4ca95e2355c
	2022-06-01 11:17:23.765693 I | raft: b2c6679ac05f2cf1 became follower at term 0
	2022-06-01 11:17:23.765701 I | raft: newRaft b2c6679ac05f2cf1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2022-06-01 11:17:23.765706 I | raft: b2c6679ac05f2cf1 became follower at term 1
	2022-06-01 11:17:23.770216 W | auth: simple token is not cryptographically signed
	2022-06-01 11:17:23.773114 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-06-01 11:17:23.775073 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-06-01 11:17:23.775389 I | embed: listening for metrics on http://192.168.58.2:2381
	2022-06-01 11:17:23.775515 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-06-01 11:17:23.775709 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-06-01 11:17:23.775847 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2022-06-01 11:17:24.666034 I | raft: b2c6679ac05f2cf1 is starting a new election at term 1
	2022-06-01 11:17:24.666080 I | raft: b2c6679ac05f2cf1 became candidate at term 2
	2022-06-01 11:17:24.666097 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	2022-06-01 11:17:24.666109 I | raft: b2c6679ac05f2cf1 became leader at term 2
	2022-06-01 11:17:24.666115 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2022-06-01 11:17:24.666444 I | etcdserver: published {Name:old-k8s-version-20220601105850-6708 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2022-06-01 11:17:24.666483 I | embed: ready to serve client requests
	2022-06-01 11:17:24.666512 I | etcdserver: setting up the initial cluster version to 3.3
	2022-06-01 11:17:24.666542 I | embed: ready to serve client requests
	2022-06-01 11:17:24.667781 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-06-01 11:17:24.667915 I | etcdserver/api: enabled capabilities for version 3.3
	2022-06-01 11:17:24.669123 I | embed: serving client requests on 127.0.0.1:2379
	2022-06-01 11:17:24.669320 I | embed: serving client requests on 192.168.58.2:2379
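	
	(The etcd log shows a clean single-member bootstrap: b2c6679ac05f2cf1 starts a new election at term 1 and elects itself leader at term 2, so etcd itself is healthy. Its status can be confirmed with etcdctl using the cert paths printed in the embed ClientTLS line above; whether etcdctl is available inside this container is an assumption:)
	
	    ETCDCTL_API=3 etcdctl \
	      --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint status
	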
	
	* 
	* ==> kernel <==
	*  11:21:49 up  1:04,  0 users,  load average: 1.53, 1.77, 1.97
	Linux old-k8s-version-20220601105850-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [c6dd696a23428853e9dd6984647f57b50a36f6b1945411c85942976aea45fbac] <==
	* I0601 11:17:30.162652       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:17:30.442712       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0601 11:17:30.771674       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 11:17:30.772243       1 controller.go:606] quota admission added evaluator for: endpoints
	I0601 11:17:31.664041       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0601 11:17:31.871061       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0601 11:17:32.186169       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0601 11:17:46.882322       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:17:46.892269       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0601 11:17:47.062764       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0601 11:17:50.590123       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0601 11:17:50.590198       1 handler_proxy.go:99] no RequestInfo found in the context
	E0601 11:17:50.590254       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:17:50.590264       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:18:50.590476       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0601 11:18:50.590546       1 handler_proxy.go:99] no RequestInfo found in the context
	E0601 11:18:50.590576       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:18:50.590587       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:20:50.590842       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0601 11:20:50.590929       1 handler_proxy.go:99] no RequestInfo found in the context
	E0601 11:20:50.591003       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:20:50.591021       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [0347453bb77d9cbbda5d7387d32f01c8f751abedb22f454acabca801b977d1de] <==
	* I0601 11:17:49.060327       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e374fcb6-df89-4d38-9f8d-90beaf4aa0ff", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:17:49.061590       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:17:49.066424       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"8824744d-93c2-4e3e-aa93-d8dbda713c82", APIVersion:"apps/v1", ResourceVersion:"429", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:17:49.066438       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:17:49.067555       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:17:49.067564       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e374fcb6-df89-4d38-9f8d-90beaf4aa0ff", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:17:49.069737       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:17:49.069728       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"8824744d-93c2-4e3e-aa93-d8dbda713c82", APIVersion:"apps/v1", ResourceVersion:"429", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:17:49.156975       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"8824744d-93c2-4e3e-aa93-d8dbda713c82", APIVersion:"apps/v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-6fb5469cf5-8d9mk
	I0601 11:17:49.157021       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"e374fcb6-df89-4d38-9f8d-90beaf4aa0ff", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-7n8xp
	I0601 11:17:49.675836       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-6f89b5864b", UID:"f0a5c97d-6d1d-44ff-ab39-543786582653", APIVersion:"apps/v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-6f89b5864b-hf7p6
	E0601 11:18:17.621000       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:18:19.367696       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:18:47.872510       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:18:51.369070       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:19:18.124080       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:19:23.370553       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:19:48.375536       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:19:55.372006       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:20:18.627063       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:20:27.373510       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:20:48.878473       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:20:59.374941       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:21:19.130031       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:21:31.376454       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [bba155e14e8f3aac6b7847d5dd32a5f7b82602b1afa57eb4054e328a8e89213d] <==
	* W0601 11:17:47.485358       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0601 11:17:47.491254       1 node.go:135] Successfully retrieved node IP: 192.168.58.2
	I0601 11:17:47.491284       1 server_others.go:149] Using iptables Proxier.
	I0601 11:17:47.491627       1 server.go:529] Version: v1.16.0
	I0601 11:17:47.492168       1 config.go:131] Starting endpoints config controller
	I0601 11:17:47.492204       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0601 11:17:47.492231       1 config.go:313] Starting service config controller
	I0601 11:17:47.492247       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0601 11:17:47.592479       1 shared_informer.go:204] Caches are synced for service config 
	I0601 11:17:47.592481       1 shared_informer.go:204] Caches are synced for endpoints config 
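	
	(kube-proxy saw an empty proxy-mode flag and fell back to the iptables proxier, and both informer caches synced, so service routing on this node is functional. The programmed rules can be inspected directly, as a sketch:)
	
	    sudo iptables -t nat -L KUBE-SERVICES | head
	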
	
	* 
	* ==> kube-scheduler [a946b8ec63ccdd39b9f960ce249eaec023b354513cddc382bd365e4c96999dbd] <==
	* I0601 11:17:27.465268       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0601 11:17:27.466328       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0601 11:17:27.482456       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:17:27.482502       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:17:27.482606       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:17:27.483167       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:17:27.483240       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:17:27.554063       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:17:27.555765       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:17:27.555838       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:17:27.560122       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:17:27.560124       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:17:27.560197       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:17:28.554528       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:17:28.556334       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:17:28.558148       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:17:28.559674       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:17:28.560773       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:17:28.561950       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:17:28.563536       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:17:28.564733       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:17:28.565826       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:17:28.566988       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:17:28.568112       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:17:49.163701       1 factory.go:585] pod is already present in the activeQ
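	
	(The burst of "forbidden" reflector errors at 11:17:27-11:17:28 above is the scheduler's informers starting before kubeadm has created the system:kube-scheduler RBAC bindings; the errors stop once RBAC lands. A post-bootstrap check, for illustration:)
	
	    kubectl auth can-i list nodes --as=system:kube-scheduler
	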
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:11:54 UTC, end at Wed 2022-06-01 11:21:49 UTC. --
	Jun 01 11:19:48 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:19:48.171227    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	[... the same "cni plugin not initialized" error repeats every 5 seconds ...]
	Jun 01 11:21:48 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:21:48.190799    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

-- /stdout --
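
The loop above is the kubelet waiting for a CNI config that never shows up: this job pins both kubelet and containerd to the non-default conf dir /etc/cni/net.mk (see the kubelet.cni-conf-dir flag and the containerd conf_dir rewrite in the embed-certs run below). A minimal diagnostic sketch, assuming the profile container is still running; the profile name is taken from this log and only stock minikube/crictl commands are used:

	out/minikube-linux-amd64 ssh -p old-k8s-version-20220601105850-6708 -- sudo ls -la /etc/cni/net.mk
	out/minikube-linux-amd64 ssh -p old-k8s-version-20220601105850-6708 -- sudo crictl info | grep -i -A 3 networkready

An empty conf dir from the first command, together with NetworkReady=false in the second, would match the kubelet errors above.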
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-5644d7b6d9-86f9d metrics-server-6f89b5864b-hf7p6 storage-provisioner dashboard-metrics-scraper-6b84985989-7n8xp kubernetes-dashboard-6fb5469cf5-8d9mk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 describe pod coredns-5644d7b6d9-86f9d metrics-server-6f89b5864b-hf7p6 storage-provisioner dashboard-metrics-scraper-6b84985989-7n8xp kubernetes-dashboard-6fb5469cf5-8d9mk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601105850-6708 describe pod coredns-5644d7b6d9-86f9d metrics-server-6f89b5864b-hf7p6 storage-provisioner dashboard-metrics-scraper-6b84985989-7n8xp kubernetes-dashboard-6fb5469cf5-8d9mk: exit status 1 (55.821264ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-86f9d" not found
	Error from server (NotFound): pods "metrics-server-6f89b5864b-hf7p6" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6b84985989-7n8xp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-6fb5469cf5-8d9mk" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220601105850-6708 describe pod coredns-5644d7b6d9-86f9d metrics-server-6f89b5864b-hf7p6 storage-provisioner dashboard-metrics-scraper-6b84985989-7n8xp kubernetes-dashboard-6fb5469cf5-8d9mk: exit status 1
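
All five describe calls return NotFound, most likely because the pods listed at helpers_test.go:261 were deleted or replaced between the two kubectl invocations. A sketch that re-resolves pod names at describe time (hypothetical; the harness itself captures the names once):

	kubectl --context old-k8s-version-20220601105850-6708 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	  | while read ns name; do kubectl --context old-k8s-version-20220601105850-6708 -n "$ns" describe po "$name"; done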
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (596.07s)

TestStartStop/group/embed-certs/serial/SecondStart (543.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220601110327-6708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:16:55.975149    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 11:17:12.929091    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 11:17:21.870031    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 11:17:44.596251    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:17:54.652696    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:18:34.904823    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:19:09.242201    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
[... the same calico-20220601104839-6708 client.crt error repeats with backoff through 11:19:11 ...]
E0601 11:19:14.361349    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
E0601 11:19:17.698965    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:19:19.482010    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
E0601 11:19:22.035105    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:19:29.722987    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
E0601 11:19:31.087169    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20220601110327-6708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (9m1.301637127s)

-- stdout --
	* [embed-certs-20220601110327-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20220601110327-6708 in cluster embed-certs-20220601110327-6708
	* Pulling base image ...
	* Restarting existing docker container for "embed-certs-20220601110327-6708" ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image kubernetesui/dashboard:v2.5.1
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0601 11:16:27.030025  270029 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:16:27.030200  270029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:16:27.030210  270029 out.go:309] Setting ErrFile to fd 2...
	I0601 11:16:27.030214  270029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:16:27.030316  270029 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:16:27.030590  270029 out.go:303] Setting JSON to false
	I0601 11:16:27.032104  270029 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3541,"bootTime":1654078646,"procs":726,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:16:27.032160  270029 start.go:125] virtualization: kvm guest
	I0601 11:16:27.034601  270029 out.go:177] * [embed-certs-20220601110327-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:16:27.036027  270029 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:16:27.035970  270029 notify.go:193] Checking for updates...
	I0601 11:16:27.037352  270029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:16:27.038882  270029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:16:27.040231  270029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:16:27.041542  270029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:16:27.043240  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:16:27.043659  270029 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:16:27.081227  270029 docker.go:137] docker version: linux-20.10.16
	I0601 11:16:27.081310  270029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:16:27.182938  270029 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:16:27.109556043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:16:27.183042  270029 docker.go:254] overlay module found
	I0601 11:16:27.185912  270029 out.go:177] * Using the docker driver based on existing profile
	I0601 11:16:27.187159  270029 start.go:284] selected driver: docker
	I0601 11:16:27.187172  270029 start.go:806] validating driver "docker" against &{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:16:27.187275  270029 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:16:27.188164  270029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:16:27.287572  270029 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:16:27.216523745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:16:27.287846  270029 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:16:27.287899  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:16:27.287909  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:16:27.287923  270029 start_flags.go:306] config:
	{Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:16:27.290349  270029 out.go:177] * Starting control plane node embed-certs-20220601110327-6708 in cluster embed-certs-20220601110327-6708
	I0601 11:16:27.291691  270029 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:16:27.292997  270029 out.go:177] * Pulling base image ...
	I0601 11:16:27.294363  270029 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:16:27.294386  270029 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:16:27.294393  270029 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:16:27.294434  270029 cache.go:57] Caching tarball of preloaded images
	I0601 11:16:27.295098  270029 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:16:27.295161  270029 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:16:27.295359  270029 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:16:27.338028  270029 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:16:27.338057  270029 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:16:27.338077  270029 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:16:27.338121  270029 start.go:352] acquiring machines lock for embed-certs-20220601110327-6708: {Name:mk2bc8f54b3ac1967b6e5e724f1be8808370dc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:16:27.338232  270029 start.go:356] acquired machines lock for "embed-certs-20220601110327-6708" in 83.619µs
	I0601 11:16:27.338252  270029 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:16:27.338262  270029 fix.go:55] fixHost starting: 
	I0601 11:16:27.338520  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:16:27.369415  270029 fix.go:103] recreateIfNeeded on embed-certs-20220601110327-6708: state=Stopped err=<nil>
	W0601 11:16:27.369444  270029 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:16:27.371758  270029 out.go:177] * Restarting existing docker container for "embed-certs-20220601110327-6708" ...
	I0601 11:16:27.373224  270029 cli_runner.go:164] Run: docker start embed-certs-20220601110327-6708
	I0601 11:16:27.750544  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:16:27.784421  270029 kic.go:416] container "embed-certs-20220601110327-6708" state is running.
	I0601 11:16:27.784842  270029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:16:27.816168  270029 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/config.json ...
	I0601 11:16:27.816441  270029 machine.go:88] provisioning docker machine ...
	I0601 11:16:27.816482  270029 ubuntu.go:169] provisioning hostname "embed-certs-20220601110327-6708"
	I0601 11:16:27.816529  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:27.849760  270029 main.go:134] libmachine: Using SSH client type: native
	I0601 11:16:27.849917  270029 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0601 11:16:27.849935  270029 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220601110327-6708 && echo "embed-certs-20220601110327-6708" | sudo tee /etc/hostname
	I0601 11:16:27.850521  270029 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54016->127.0.0.1:49437: read: connection reset by peer
	I0601 11:16:30.976432  270029 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220601110327-6708
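	# (sketch, not from the captured run) The TCP reset above is transient while sshd
	# comes up inside the freshly restarted container; the same handshake succeeds three
	# seconds later. The mapped host port it dials (49437 here) can be cross-checked with:
	#   docker port embed-certs-20220601110327-6708 22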
	
	I0601 11:16:30.976514  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.008861  270029 main.go:134] libmachine: Using SSH client type: native
	I0601 11:16:31.009014  270029 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0601 11:16:31.009044  270029 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220601110327-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220601110327-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220601110327-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:16:31.123496  270029 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:16:31.123529  270029 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:16:31.123570  270029 ubuntu.go:177] setting up certificates
	I0601 11:16:31.123582  270029 provision.go:83] configureAuth start
	I0601 11:16:31.123653  270029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:16:31.154648  270029 provision.go:138] copyHostCerts
	I0601 11:16:31.154711  270029 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:16:31.154718  270029 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:16:31.154779  270029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:16:31.154874  270029 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:16:31.154884  270029 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:16:31.154907  270029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:16:31.155010  270029 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:16:31.155022  270029 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:16:31.155045  270029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:16:31.155086  270029 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220601110327-6708 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220601110327-6708]
	I0601 11:16:31.392219  270029 provision.go:172] copyRemoteCerts
	I0601 11:16:31.392269  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:16:31.392296  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.424693  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.507177  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:16:31.523691  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 11:16:31.539588  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:16:31.556391  270029 provision.go:86] duration metric: configureAuth took 432.782419ms
	I0601 11:16:31.556423  270029 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:16:31.556601  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:16:31.556613  270029 machine.go:91] provisioned docker machine in 3.740153286s
	I0601 11:16:31.556620  270029 start.go:306] post-start starting for "embed-certs-20220601110327-6708" (driver="docker")
	I0601 11:16:31.556627  270029 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:16:31.556665  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:16:31.556708  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.588692  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.671170  270029 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:16:31.673879  270029 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:16:31.673904  270029 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:16:31.673913  270029 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:16:31.673921  270029 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:16:31.673932  270029 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:16:31.673995  270029 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:16:31.674092  270029 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:16:31.674203  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:16:31.680491  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:16:31.696768  270029 start.go:309] post-start completed in 140.137646ms
	I0601 11:16:31.696823  270029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:16:31.696867  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.728967  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.808592  270029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:16:31.813696  270029 fix.go:57] fixHost completed within 4.475428594s
	I0601 11:16:31.813724  270029 start.go:81] releasing machines lock for "embed-certs-20220601110327-6708", held for 4.475478152s
	I0601 11:16:31.813806  270029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601110327-6708
	I0601 11:16:31.845390  270029 ssh_runner.go:195] Run: systemctl --version
	I0601 11:16:31.845426  270029 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:16:31.845445  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.845474  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:16:31.878841  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.879529  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:16:31.984532  270029 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:16:31.995279  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:16:32.004188  270029 docker.go:187] disabling docker service ...
	I0601 11:16:32.004230  270029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:16:32.013110  270029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:16:32.021544  270029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:16:32.096568  270029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:16:32.177406  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:16:32.186287  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:16:32.198554  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:16:32.206479  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:16:32.214298  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:16:32.221739  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:16:32.229090  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:16:32.236531  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0601 11:16:32.248478  270029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:16:32.254712  270029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:16:32.260784  270029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:16:32.332262  270029 ssh_runner.go:195] Run: sudo systemctl restart containerd
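	# (sketch, not from the captured run) The sed/printf calls above rewrite
	# /etc/containerd/config.toml: sandbox_image, SystemdCgroup, and a non-default
	# conf_dir of /etc/cni/net.mk; the base64 payload decodes to `version = 2`.
	# A spot-check of the result after the restart could look like:
	#   out/minikube-linux-amd64 ssh -p embed-certs-20220601110327-6708 -- \
	#     sudo grep -nE 'conf_dir|sandbox_image|SystemdCgroup' /etc/containerd/config.toml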
	I0601 11:16:32.400990  270029 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:16:32.401055  270029 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:16:32.405246  270029 start.go:468] Will wait 60s for crictl version
	I0601 11:16:32.405339  270029 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:16:32.431671  270029 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:16:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:16:43.479123  270029 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:16:43.501672  270029 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:16:43.501721  270029 ssh_runner.go:195] Run: containerd --version
	I0601 11:16:43.529392  270029 ssh_runner.go:195] Run: containerd --version
	I0601 11:16:43.558583  270029 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:16:43.560125  270029 cli_runner.go:164] Run: docker network inspect embed-certs-20220601110327-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:16:43.591406  270029 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0601 11:16:43.594609  270029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
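	# (sketch, not from the captured run) The guarded rewrite above injects the
	# host.minikube.internal entry into the node's /etc/hosts idempotently; resolution
	# can be confirmed with:
	#   out/minikube-linux-amd64 ssh -p embed-certs-20220601110327-6708 -- getent hosts host.minikube.internal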
	I0601 11:16:43.605543  270029 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:16:43.607033  270029 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:16:43.607086  270029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:16:43.629330  270029 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:16:43.629349  270029 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:16:43.629396  270029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:16:43.651491  270029 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:16:43.651512  270029 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:16:43.651566  270029 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:16:43.675463  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:16:43.675488  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:16:43.675505  270029 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:16:43.675522  270029 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220601110327-6708 NodeName:embed-certs-20220601110327-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:16:43.675702  270029 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220601110327-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
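	# (sketch, not part of the generated config) The rendered kubeadm config above is
	# written to /var/tmp/minikube/kubeadm.yaml.new (scp'd a few lines below); on a live
	# node it can be inspected with:
	#   out/minikube-linux-amd64 ssh -p embed-certs-20220601110327-6708 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new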
	
	I0601 11:16:43.675851  270029 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220601110327-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
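	# (sketch, not from the captured run) The --cni-conf-dir=/etc/cni/net.mk flag in the
	# ExecStart above matches the conf_dir rewritten into /etc/containerd/config.toml
	# earlier. The drop-in lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	# (scp'd below); the merged unit can be viewed with:
	#   out/minikube-linux-amd64 ssh -p embed-certs-20220601110327-6708 -- systemctl cat kubelet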
	I0601 11:16:43.675928  270029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:16:43.682788  270029 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:16:43.682841  270029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:16:43.689239  270029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0601 11:16:43.701365  270029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:16:43.712899  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0601 11:16:43.724782  270029 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:16:43.727472  270029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:16:43.736002  270029 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708 for IP: 192.168.76.2
	I0601 11:16:43.736086  270029 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:16:43.736130  270029 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:16:43.736196  270029 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/client.key
	I0601 11:16:43.736241  270029 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key.31bdca25
	I0601 11:16:43.736273  270029 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key
	I0601 11:16:43.736370  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:16:43.736396  270029 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:16:43.736408  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:16:43.736433  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:16:43.736458  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:16:43.736488  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:16:43.736535  270029 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:16:43.737038  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:16:43.753252  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:16:43.769071  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:16:43.785137  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601110327-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:16:43.800815  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:16:43.816567  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:16:43.832435  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:16:43.848147  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:16:43.864438  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:16:43.880361  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:16:43.896362  270029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:16:43.912480  270029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:16:43.924191  270029 ssh_runner.go:195] Run: openssl version
	I0601 11:16:43.928562  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:16:43.935311  270029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:16:43.938057  270029 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:16:43.938091  270029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:16:43.942508  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:16:43.948891  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:16:43.955605  270029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:16:43.958385  270029 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:16:43.958427  270029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:16:43.962842  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:16:43.969066  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:16:43.975850  270029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:16:43.978786  270029 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:16:43.978822  270029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:16:43.983269  270029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
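
The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs must also be reachable as <subject-hash>.0 (the ".0" meaning the first certificate with that hash) for openssl to find it. A sketch of the same two steps in Go, with the link directory as a parameter so it can run against /tmp instead of /etc/ssl/certs (assumes the openssl binary is on PATH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// hashLink computes the OpenSSL subject hash of certPath and links it
	// into dir as <hash>.0, the naming scheme OpenSSL's lookup expects.
	func hashLink(certPath, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA above
		link := filepath.Join(dir, hash+".0")
		os.Remove(link) // emulate ln -f
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/tmp"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
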
	I0601 11:16:43.989455  270029 kubeadm.go:395] StartCluster: {Name:embed-certs-20220601110327-6708 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601110327-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:16:43.989553  270029 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:16:43.989584  270029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:16:44.014119  270029 cri.go:87] found id: "f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	I0601 11:16:44.014147  270029 cri.go:87] found id: "4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6"
	I0601 11:16:44.014155  270029 cri.go:87] found id: "d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a"
	I0601 11:16:44.014160  270029 cri.go:87] found id: "c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0"
	I0601 11:16:44.014169  270029 cri.go:87] found id: "a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f"
	I0601 11:16:44.014178  270029 cri.go:87] found id: "b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2"
	I0601 11:16:44.014195  270029 cri.go:87] found id: ""
	I0601 11:16:44.014231  270029 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:16:44.026017  270029 cri.go:114] JSON = null
	W0601 11:16:44.026068  270029 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
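
Here cri.go asks crictl for kube-system containers in the paused state and cross-checks with runc: runc's JSON came back null while crictl saw six containers, hence the "unpause failed" warning, which minikube tolerates and moves past. A small diagnostic sketch reproducing the two queries side by side (same commands as the log; needs sudo plus crictl and runc on the node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The crictl view: all kube-system container IDs.
		ps, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl:", err)
			return
		}
		ids := strings.Fields(string(ps))
		fmt.Printf("crictl sees %d kube-system containers\n", len(ids))

		// The runc view of the same containerd root; "null" here is the
		// mismatch the warning above is about.
		list, err := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		if err != nil {
			fmt.Println("runc:", err)
			return
		}
		fmt.Println("runc list JSON:", strings.TrimSpace(string(list)))
	}
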
	I0601 11:16:44.026121  270029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:16:44.032599  270029 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:16:44.032619  270029 kubeadm.go:626] restartCluster start
	I0601 11:16:44.032657  270029 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:16:44.038572  270029 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.039184  270029 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220601110327-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:16:44.039521  270029 kubeconfig.go:127] "embed-certs-20220601110327-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:16:44.040098  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:16:44.041394  270029 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:16:44.047555  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.047587  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.054922  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.255283  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.255367  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.263875  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.455148  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.455218  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.463550  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.655853  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.655952  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.664417  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:44.855542  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:44.855598  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:44.863960  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.055126  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.055211  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.063480  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.255826  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.255924  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.264353  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.455654  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.455728  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.464072  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.655400  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.655474  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.664018  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:45.855135  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:45.855220  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:45.863919  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.055139  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.055234  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.063984  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.255234  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.255309  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.263465  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.455752  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.455834  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.464271  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.655553  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.655615  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.664388  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:46.855579  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:46.855653  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:46.864130  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.055676  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:47.055754  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:47.064444  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.064468  270029 api_server.go:165] Checking apiserver status ...
	I0601 11:16:47.064499  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:16:47.072080  270029 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.072108  270029 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
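
restartCluster gives up on the existing apiserver after the burst of pgrep probes above, spaced roughly 200 ms apart, and concludes "needs reconfigure". A sketch of that poll-until-deadline pattern (interval and timeout here are illustrative, not minikube's exact values; the log runs pgrep via sudo on the node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServerPid polls pgrep, as the log does, until the
	// kube-apiserver process appears or the deadline passes.
	func waitForAPIServerPid(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			time.Sleep(200 * time.Millisecond) // pgrep exits 1 when no match
		}
		return "", fmt.Errorf("timed out waiting for kube-apiserver")
	}

	func main() {
		pid, err := waitForAPIServerPid(3 * time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver pid:", pid)
	}
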
	I0601 11:16:47.072115  270029 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:16:47.072127  270029 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:16:47.072169  270029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:16:47.097085  270029 cri.go:87] found id: "f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187"
	I0601 11:16:47.097117  270029 cri.go:87] found id: "4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6"
	I0601 11:16:47.097128  270029 cri.go:87] found id: "d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a"
	I0601 11:16:47.097138  270029 cri.go:87] found id: "c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0"
	I0601 11:16:47.097146  270029 cri.go:87] found id: "a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f"
	I0601 11:16:47.097156  270029 cri.go:87] found id: "b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2"
	I0601 11:16:47.097162  270029 cri.go:87] found id: ""
	I0601 11:16:47.097167  270029 cri.go:232] Stopping containers: [f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187 4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6 d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0 a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2]
	I0601 11:16:47.097217  270029 ssh_runner.go:195] Run: which crictl
	I0601 11:16:47.099999  270029 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop f1db42bbd17faeafd4cd3be2854a922ebf474bf60a0d563922769dd22d828187 4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6 d49ab0e8a34f4fc8f9cd1e7a8cba837be3548801379cde0d444aadf2a833b32a c32cb0a91408a09b3c11ff34a26cefeb792698bb4386b00d3bf469632b43a1d0 a985029383eb2ce5944970221a3e1c9b33b522c0c97a0fb29c0b4c260cbf6a2f b8dd730d917c46f1998a84680d8f66c7fe1a671f92381a72b93c7e59652409f2
	I0601 11:16:47.124540  270029 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:16:47.134618  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:16:47.141742  270029 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  1 11:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:03 /etc/kubernetes/scheduler.conf
	
	I0601 11:16:47.141795  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:16:47.148369  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:16:47.154571  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:16:47.160776  270029 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.160822  270029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:16:47.166675  270029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:16:47.172938  270029 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:16:47.172978  270029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 11:16:47.179087  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:16:47.185727  270029 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:16:47.185749  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:47.228261  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:48.197494  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:48.329624  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:48.378681  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
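
The restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml instead of running a full kubeadm init. A sketch of that sequence (binary and config paths copied from the commands above; only meaningful against a disposable control plane):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The five phases, in the order the log runs them.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.23.6/kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				return
			}
		}
	}
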
	I0601 11:16:48.420684  270029 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:16:48.420732  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:48.929035  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:49.428979  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:49.928976  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:50.428888  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:50.928698  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:51.428664  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:51.929701  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:52.429050  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:52.928894  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:53.429111  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:53.929528  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:54.429038  270029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:16:54.463645  270029 api_server.go:71] duration metric: took 6.042967785s to wait for apiserver process to appear ...
	I0601 11:16:54.463674  270029 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:16:54.463686  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:54.464059  270029 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0601 11:16:54.964315  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:57.340901  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:16:57.340932  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:16:57.464200  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:57.470124  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:16:57.470161  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:16:57.964628  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:57.969079  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:16:57.969109  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:16:58.464413  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:58.469280  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:16:58.469323  270029 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:16:58.964873  270029 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0601 11:16:58.969629  270029 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0601 11:16:58.976323  270029 api_server.go:140] control plane version: v1.23.6
	I0601 11:16:58.976349  270029 api_server.go:130] duration metric: took 4.512668885s to wait for apiserver health ...
	I0601 11:16:58.976362  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:16:58.976370  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:16:58.978490  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:16:58.979893  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:16:58.983633  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:16:58.983655  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:16:58.996686  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:16:59.594447  270029 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:16:59.601657  270029 system_pods.go:59] 9 kube-system pods found
	I0601 11:16:59.601692  270029 system_pods.go:61] "coredns-64897985d-9dpfv" [2fd986d2-2806-41d0-b75f-04a9f5883420] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:16:59.601699  270029 system_pods.go:61] "etcd-embed-certs-20220601110327-6708" [696f91cd-2833-44cc-80cb-7cff571b5b35] Running
	I0601 11:16:59.601709  270029 system_pods.go:61] "kindnet-92tfl" [1e2e52a8-4f89-49af-9741-f79384628a29] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:16:59.601719  270029 system_pods.go:61] "kube-apiserver-embed-certs-20220601110327-6708" [a1b6d250-97ce-4261-983a-a43004795368] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:16:59.601741  270029 system_pods.go:61] "kube-controller-manager-embed-certs-20220601110327-6708" [2f9b6898-a046-4ff4-8a25-f38e0bfc8ebd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:16:59.601766  270029 system_pods.go:61] "kube-proxy-99lsz" [c2f232c6-4807-4bcf-a1ca-c39489a0112a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:16:59.601778  270029 system_pods.go:61] "kube-scheduler-embed-certs-20220601110327-6708" [846abe25-58d2-4c73-8fb2-bd8f7d4cd289] Running
	I0601 11:16:59.601786  270029 system_pods.go:61] "metrics-server-b955d9d8-c4kht" [b1221545-5b1f-4fd0-9d91-732fae262566] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:16:59.601813  270029 system_pods.go:61] "storage-provisioner" [8d62c4a6-0f6f-4855-adc3-3347614c0287] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:16:59.601825  270029 system_pods.go:74] duration metric: took 7.351583ms to wait for pod list to return data ...
	I0601 11:16:59.601839  270029 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:16:59.604272  270029 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:16:59.604298  270029 node_conditions.go:123] node cpu capacity is 8
	I0601 11:16:59.604311  270029 node_conditions.go:105] duration metric: took 2.462157ms to run NodePressure ...
	I0601 11:16:59.604330  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:16:59.726966  270029 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:16:59.731041  270029 kubeadm.go:777] kubelet initialised
	I0601 11:16:59.731062  270029 kubeadm.go:778] duration metric: took 4.07535ms waiting for restarted kubelet to initialise ...
	I0601 11:16:59.731070  270029 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:16:59.737745  270029 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
	I0601 11:17:01.743720  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:04.243101  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:06.743027  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:09.243031  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:11.742744  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:13.743254  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:16.242930  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:18.243905  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:20.743239  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:23.242797  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:25.244019  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:27.743633  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:30.243041  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:32.243074  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:34.742805  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:36.743017  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:38.743858  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:41.242806  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:43.243005  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:45.742725  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:47.745681  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:50.242914  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:17:52.742723  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 81 near-identical pod_ready.go:102 entries elided: the same Pending/Unschedulable status for pod "coredns-64897985d-9dpfv" (node tainted node.kubernetes.io/not-ready) was logged roughly every 2-2.5s from 11:17:54 through 11:20:57 ...]
	I0601 11:20:59.243739  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.739945  270029 pod_ready.go:81] duration metric: took 4m0.002166585s waiting for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
	E0601 11:20:59.739968  270029 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:20:59.739995  270029 pod_ready.go:38] duration metric: took 4m0.008917217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:59.740018  270029 kubeadm.go:630] restartCluster took 4m15.707393707s
	W0601 11:20:59.740131  270029 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
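
For context on the loop condensed above: pod_ready.go is polling the pod's Ready condition until a fixed budget (4m0s here) runs out. The following is a minimal client-go sketch of that kind of wait, not minikube's actual implementation; the kubeconfig path, the hard-coded pod name taken from the log, the 2s poll interval, and the 4-minute timeout are all assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's conditions until PodReady=True or the timeout
// elapses, mirroring the ~2-2.5s cadence and 4m0s budget visible in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q not Ready yet (phase %s)\n", name, pod.Status.Phase)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting %s for pod %q in %q to be Ready", timeout, name, ns)
}

func main() {
	// clientcmd.RecommendedHomeFile resolves to ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-64897985d-9dpfv", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

A pod stuck Unschedulable on a node tainted node.kubernetes.io/not-ready, as here, will never pass this check until the CNI comes up and the taint clears, which is why the wait exhausts its budget and the cluster is reset.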
	I0601 11:20:59.740156  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:21:01.430762  270029 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.690579833s)
	I0601 11:21:01.430838  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:01.440364  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:21:01.447145  270029 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:21:01.447194  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:21:01.453852  270029 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
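
The exit status 2 above means the four kubeadm-generated config files are absent (the `kubeadm reset` just removed them), so minikube skips stale-config cleanup and falls through to a fresh `kubeadm init`. A hypothetical sketch of that existence check (file list taken from the log; the logic is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The four files kubeadm writes for an existing control plane; if any is
	// missing, there is no stale configuration left to clean up.
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	clean := true
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("%s: %v\n", f, err)
			clean = false
		}
	}
	fmt.Println("stale config cleanup applicable:", clean)
}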
	I0601 11:21:01.453891  270029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:21:01.701224  270029 out.go:204]   - Generating certificates and keys ...
	I0601 11:21:02.294583  270029 out.go:204]   - Booting up control plane ...
	I0601 11:21:14.337355  270029 out.go:204]   - Configuring RBAC rules ...
	I0601 11:21:14.750718  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:21:14.750741  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:21:14.752905  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:21:14.754285  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:21:14.758047  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:21:14.758065  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:21:14.771201  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:21:15.434277  270029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:21:15.434380  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.434381  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601110327-6708 minikube.k8s.io/updated_at=2022_06_01T11_21_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489119  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489208  270029 ops.go:34] apiserver oom_adj: -16
	[... 23 identical `kubectl get sa default` probes elided: retried every ~0.5s from 11:21:16 through 11:21:27 ...]
	I0601 11:21:27.579729  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.636171  270029 kubeadm.go:1045] duration metric: took 12.201851278s to wait for elevateKubeSystemPrivileges.
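
The probe loop condensed above succeeds once the "default" ServiceAccount exists, which only happens after the controller-manager is up, so it doubles as a control-plane readiness gate before elevateKubeSystemPrivileges proceeds. A minimal client-go sketch of the same wait (an assumed equivalent of the kubectl loop, with the ~0.5s cadence taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitDefaultSA retries until the "default" ServiceAccount in the "default"
// namespace exists, mirroring the repeated `kubectl get sa default` probes.
func waitDefaultSA(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // the log retries on a ~0.5s cadence
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitDefaultSA(context.Background(), kubernetes.NewForConfigOrDie(cfg), time.Minute); err != nil {
		fmt.Println(err)
	}
}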
	I0601 11:21:27.636205  270029 kubeadm.go:397] StartCluster complete in 4m43.646757592s
	I0601 11:21:27.636227  270029 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:27.636334  270029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:21:27.637880  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:28.157076  270029 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601110327-6708" rescaled to 1
	I0601 11:21:28.157150  270029 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:21:28.157180  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:21:28.159818  270029 out.go:177] * Verifying Kubernetes components...
	I0601 11:21:28.157185  270029 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:21:28.157406  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:21:28.161484  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:28.161496  270029 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161511  270029 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161523  270029 addons.go:165] addon metrics-server should already be in state true
	I0601 11:21:28.161537  270029 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161566  270029 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601110327-6708"
	I0601 11:21:28.161573  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	W0601 11:21:28.161579  270029 addons.go:165] addon dashboard should already be in state true
	I0601 11:21:28.161483  270029 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161622  270029 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161631  270029 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:21:28.161636  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161669  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161500  270029 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161709  270029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601110327-6708"
	I0601 11:21:28.161949  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162094  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162123  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162229  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.209663  270029 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.211523  270029 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:21:28.213009  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:21:28.213030  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:21:28.213079  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.216922  270029 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.218989  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:21:28.217201  270029 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.219035  270029 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:21:28.219075  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.219579  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.219012  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:21:28.219781  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.236451  270029 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:21:28.238138  270029 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.238163  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:21:28.238217  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.246850  270029 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:21:28.246885  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:21:28.273680  270029 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.273707  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:21:28.273761  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.278846  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.279320  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.286384  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.321729  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.455756  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:21:28.455785  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:21:28.466348  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.469026  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:21:28.469067  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:21:28.469486  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.478076  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:21:28.478099  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:21:28.487008  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:21:28.487036  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:21:28.573106  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:21:28.573135  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:21:28.574698  270029 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0601 11:21:28.577019  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.577042  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:21:28.653936  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:21:28.653967  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:21:28.658482  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.671762  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:21:28.671808  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:21:28.758424  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:21:28.758516  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:21:28.776703  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:21:28.776735  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:21:28.794636  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:21:28.794670  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:21:28.959418  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:28.959449  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:21:28.976465  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:29.354605  270029 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601110327-6708"
	I0601 11:21:29.699561  270029 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:21:29.700807  270029 addons.go:417] enableAddons completed in 1.543631535s
	I0601 11:21:30.260215  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:32.260412  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:34.760173  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:36.760442  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:38.760533  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:40.761060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:43.259684  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.260363  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:47.760960  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:50.260784  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:52.760281  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:55.259912  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:57.259956  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:59.759755  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:01.759853  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:03.760721  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:06.260069  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:08.260739  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:10.760237  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:13.259813  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:15.260153  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:17.260859  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:19.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:21.760654  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:24.260433  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:26.760129  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:28.760717  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:31.260368  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:33.760112  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:35.760758  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:38.260723  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:40.760393  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:43.259823  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:45.260551  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:47.760311  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:49.760404  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:52.260594  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:54.760044  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:56.760073  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:58.760157  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:01.260267  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:03.260561  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:05.260780  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:07.760513  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.260326  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:12.260674  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:14.260918  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:16.760064  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:18.760686  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:21.260676  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:23.760024  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:26.259746  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:28.260714  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:30.760541  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:33.260035  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:35.261060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:37.760144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.260334  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:42.759808  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:46.760285  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:48.760374  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:51.260999  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:53.760587  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:56.260172  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:58.759799  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:00.760631  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:03.260687  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:05.260722  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:07.760567  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:10.260596  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:12.260967  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.759793  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:16.760292  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:18.760531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:20.760689  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:22.761011  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:25.261206  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:27.261441  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:29.759885  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:32.260145  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:34.260990  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:36.760710  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:39.259837  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:41.260124  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:43.260473  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:45.760870  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:48.260450  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:50.260933  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:52.261071  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:54.261578  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:56.760078  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:58.760245  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:00.760344  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:03.260144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.260531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:07.760027  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:09.760904  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:12.260100  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.759992  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:16.760260  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:19.260136  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:21.260700  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:23.760875  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:26.261082  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:28.263320  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:28.263343  270029 node_ready.go:38] duration metric: took 4m0.016466534s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:25:28.265930  270029 out.go:177] 
	W0601 11:25:28.267524  270029 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:25:28.267549  270029 out.go:239] * 
	W0601 11:25:28.268404  270029 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:25:28.269962  270029 out.go:177] 

                                                
                                                
** /stderr **
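The failure above is the readiness wait timing out: from 11:21:28 to 11:25:28 minikube polls the node's Ready condition roughly every 2.5s and never sees it leave "False" (a node typically reports Ready only once its container runtime's CNI network has initialized). A rough manual equivalent of that check, run against the same kubeconfig with stock kubectl (the jsonpath expression is an illustration, not taken from the log):

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node embed-certs-20220601110327-6708 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

A node that stays "False" here for the full wait produces the GUEST_START timeout seen above.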
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-20220601110327-6708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
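For reference, the host-record injection logged at 11:21:28 (start.go:806) is performed by the sed pipeline shown earlier in the stderr log: it inserts a hosts stanza immediately before the forward directive in the coredns ConfigMap's Corefile. Reconstructed from that sed expression, the edited region of the Corefile looks like the following (the forward line is the stock default the pattern anchors on; any trailing options on it are untouched):

	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf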
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601110327-6708
helpers_test.go:235: (dbg) docker inspect embed-certs-20220601110327-6708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d",
	        "Created": "2022-06-01T11:03:36.104826313Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270313,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:16:27.742788253Z",
	            "FinishedAt": "2022-06-01T11:16:26.518323114Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hosts",
	        "LogPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d-json.log",
	        "Name": "/embed-certs-20220601110327-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220601110327-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220601110327-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220601110327-6708",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220601110327-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220601110327-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72ab588bc7e123d3b05f17bdda997b104506e595ecdeb222d14dd57971293f56",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/72ab588bc7e1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220601110327-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b77a5d5e61bf",
	                        "embed-certs-20220601110327-6708"
	                    ],
	                    "NetworkID": "85c31b5e416e869b4ae1612c11e4fd39718a187a5009c211794c61313cb0c682",
	                    "EndpointID": "4966797cb9c652639f31bd37d26023d2cadd1e64690ba73eb6ab2fe001962d43",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
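The SSH clients opened at 11:21:28 (port 49437) come from the Go-template form of docker inspect that appears throughout the stderr log; run by hand against the container inspected above, the same lookups are:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-20220601110327-6708
	# -> 49437
	docker container inspect -f '{{.State.Status}}' embed-certs-20220601110327-6708
	# -> running

Both template expressions are copied verbatim from cli_runner lines in the log; the expected values are read off the JSON above.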
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220601110327-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p                                                         | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | calico-20220601104839-6708                                 | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220601104839-6708                              | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:21 UTC | 01 Jun 22 11:21 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:19:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:19:52.827023  276679 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:19:52.827225  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827237  276679 out.go:309] Setting ErrFile to fd 2...
	I0601 11:19:52.827242  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827359  276679 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:19:52.827588  276679 out.go:303] Setting JSON to false
	I0601 11:19:52.828890  276679 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3747,"bootTime":1654078646,"procs":456,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:19:52.828955  276679 start.go:125] virtualization: kvm guest
	I0601 11:19:52.831944  276679 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:19:52.833439  276679 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:52.833372  276679 notify.go:193] Checking for updates...
	I0601 11:19:52.835007  276679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:52.836578  276679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:19:52.837966  276679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:19:52.839440  276679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:19:52.841215  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:52.841578  276679 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:52.880823  276679 docker.go:137] docker version: linux-20.10.16
	I0601 11:19:52.880897  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:52.978177  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:52.908721136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
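The dump above is the output of a single call: docker system info --format "{{json .}}" returns the entire daemon state as one JSON object, which the caller decodes into a struct. A minimal Go sketch of that round trip; the struct below keeps only a handful of fields taken from the log and is illustrative, not minikube's actual type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo picks out a few of the fields visible in the log line above;
	// the real payload carries many more.
	type dockerInfo struct {
		NCPU            int
		MemTotal        int64
		ServerVersion   string
		CgroupDriver    string
		OperatingSystem string
	}

	func main() {
		// The exact command cli_runner executes in the log.
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
	}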
	I0601 11:19:52.978275  276679 docker.go:254] overlay module found
	I0601 11:19:52.981078  276679 out.go:177] * Using the docker driver based on existing profile
	I0601 11:19:52.982316  276679 start.go:284] selected driver: docker
	I0601 11:19:52.982326  276679 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:52.982412  276679 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:52.983242  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:53.085320  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:53.012439643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:53.085561  276679 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:19:53.085581  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:19:53.085589  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:19:53.085608  276679 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:53.088575  276679 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.089964  276679 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:19:53.091501  276679 out.go:177] * Pulling base image ...
	I0601 11:19:53.092800  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:19:53.092839  276679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:19:53.092856  276679 cache.go:57] Caching tarball of preloaded images
	I0601 11:19:53.092897  276679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:19:53.093061  276679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:19:53.093076  276679 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
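The preload step above is just a cache probe: if the versioned tarball already exists under the cache directory, the download is skipped. A sketch of that check (path rooted at $HOME/.minikube for brevity; this run points MINIKUBE_HOME elsewhere):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home, _ := os.UserHomeDir()
		// Tarball name taken from the "Found local preload" line above.
		tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4")
		if _, err := os.Stat(tarball); err == nil {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("preload missing, would download it:", err)
		}
	}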
	I0601 11:19:53.093182  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.136384  276679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:19:53.136410  276679 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:19:53.136424  276679 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:19:53.136454  276679 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:19:53.136550  276679 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 69.025µs
	I0601 11:19:53.136570  276679 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:19:53.136577  276679 fix.go:55] fixHost starting: 
	I0601 11:19:53.137208  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.168642  276679 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601110654-6708: state=Stopped err=<nil>
	W0601 11:19:53.168681  276679 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:19:53.170972  276679 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601110654-6708" ...
	I0601 11:19:50.719789  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.220276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.243194  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:55.243470  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:53.172500  276679 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.580842  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.615796  276679 kic.go:416] container "default-k8s-different-port-20220601110654-6708" state is running.
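Recreate-vs-restart is decided from docker's own view of the container: inspect the state, run docker start if it is not running, then inspect again until it reports running. A sketch of that sequence using the same CLI commands the log shows (error handling reduced to panics):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState mirrors the inspect command the log runs before and after the restart.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const name = "default-k8s-different-port-20220601110654-6708"
		state, err := containerState(name)
		if err != nil {
			panic(err)
		}
		if state != "running" {
			// "Restarting existing docker container" boils down to docker start.
			if err := exec.Command("docker", "start", name).Run(); err != nil {
				panic(err)
			}
		}
		state, _ = containerState(name)
		fmt.Println("container state:", state) // the log sees "running" here
	}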
	I0601 11:19:53.616193  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.647308  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.647503  276679 machine.go:88] provisioning docker machine ...
	I0601 11:19:53.647526  276679 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:19:53.647560  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.679842  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:53.680106  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:53.680131  276679 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:19:53.680742  276679 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55946->127.0.0.1:49442: read: connection reset by peer
	I0601 11:19:56.807880  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:19:56.807951  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.839321  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:56.839475  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:56.839510  276679 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:19:56.951445  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:19:56.951473  276679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:19:56.951491  276679 ubuntu.go:177] setting up certificates
	I0601 11:19:56.951499  276679 provision.go:83] configureAuth start
	I0601 11:19:56.951539  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.982392  276679 provision.go:138] copyHostCerts
	I0601 11:19:56.982451  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:19:56.982464  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:19:56.982537  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:19:56.982652  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:19:56.982664  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:19:56.982697  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:19:56.982789  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:19:56.982802  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:19:56.982829  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:19:56.982876  276679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
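The server certificate is regenerated with the SAN list printed above (node IP, loopback, hostname aliases) and signed by the cached minikube CA. A compact sketch of building such a certificate with Go's crypto/x509; it self-signs for brevity where the run above signs with ca-key.pem, and the 26280h lifetime is the CertExpiration value from the config dump:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SAN list copied from the san=[...] entry in the log above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject: pkix.Name{
				Organization: []string{"jenkins.default-k8s-different-port-20220601110654-6708"},
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour),
			DNSNames:    []string{"localhost", "minikube", "default-k8s-different-port-20220601110654-6708"},
			IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here; minikube passes its CA cert and key instead of tmpl/key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}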
	I0601 11:19:57.067574  276679 provision.go:172] copyRemoteCerts
	I0601 11:19:57.067626  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:19:57.067654  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.098669  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.182904  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:19:57.199734  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:19:57.215838  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:19:57.232284  276679 provision.go:86] duration metric: configureAuth took 280.774927ms
	I0601 11:19:57.232312  276679 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:19:57.232468  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:57.232480  276679 machine.go:91] provisioned docker machine in 3.584963826s
	I0601 11:19:57.232486  276679 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:19:57.232492  276679 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:19:57.232530  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:19:57.232572  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.265048  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.351029  276679 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:19:57.353646  276679 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:19:57.353677  276679 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:19:57.353687  276679 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:19:57.353695  276679 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:19:57.353706  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:19:57.353765  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:19:57.353858  276679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:19:57.353951  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:19:57.360153  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:19:57.376881  276679 start.go:309] post-start completed in 144.384989ms
	I0601 11:19:57.376932  276679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:19:57.376962  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.411118  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.496188  276679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:19:57.499982  276679 fix.go:57] fixHost completed within 4.363400058s
	I0601 11:19:57.500005  276679 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 4.363442227s
	I0601 11:19:57.500082  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532057  276679 ssh_runner.go:195] Run: systemctl --version
	I0601 11:19:57.532107  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532107  276679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:19:57.532168  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.567039  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.567550  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.677865  276679 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:19:57.688848  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:19:57.697588  276679 docker.go:187] disabling docker service ...
	I0601 11:19:57.697632  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:19:57.706476  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:19:57.714826  276679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:19:57.791919  276679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:19:55.719582  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:58.219607  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:57.743387  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:00.243011  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:57.865357  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:19:57.874183  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:19:57.886120  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.893706  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.901159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.908873  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.916512  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:19:57.923712  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
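The base64 payload written to 02-containerd.conf is tiny: dmVyc2lvbiA9IDIK decodes to the single line "version = 2", i.e. containerd's config schema version. Quick check:

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	func main() {
		b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%q\n", b) // "version = 2\n" — containerd config schema v2
	}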
	I0601 11:19:57.935738  276679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:19:57.941802  276679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:19:57.947777  276679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:19:58.021579  276679 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:19:58.089337  276679 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:19:58.089424  276679 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:19:58.092751  276679 start.go:468] Will wait 60s for crictl version
	I0601 11:19:58.092798  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:19:58.116611  276679 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:19:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:20:00.719494  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:03.219487  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:02.243060  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:04.243463  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:06.244423  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:05.719159  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:07.719735  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:09.163975  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:20:09.186613  276679 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:20:09.186676  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.214385  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.243587  276679 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:20:09.245245  276679 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:09.276501  276679 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:20:09.279800  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
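The one-liner above is the idempotent hosts-file pattern: filter out any stale host.minikube.internal line, append the fresh mapping, then copy the result back over /etc/hosts (the same pattern recurs below for control-plane.minikube.internal). The same filter-and-append in Go, assuming direct file access rather than sudo over SSH:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.49.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping, exactly like the grep -v in the log.
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}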
	I0601 11:20:09.290992  276679 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:20:08.742836  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:11.242670  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:09.292426  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:20:09.292493  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.315170  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.315189  276679 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:20:09.315224  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.338119  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.338137  276679 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:20:09.338184  276679 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:20:09.360773  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:09.360799  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:09.360817  276679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:20:09.360831  276679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:20:09.360999  276679 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
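A note on the "0%!"(MISSING) values in the evictionHard block above: they are not part of the generated config. Go's fmt package prints %!<verb>(MISSING) when a format string contains a verb with no matching operand, so the intended values are plain "0%" (and the earlier printf %!s(MISSING) was a literal printf %s). The artifact reproduces in two lines:

	package main

	import "fmt"

	func main() {
		// `%"` is parsed as a verb with no operand, exactly as in the log.
		fmt.Println(fmt.Sprintf(`  nodefs.available: "0%"`))
		// Output:   nodefs.available: "0%!"(MISSING)
	}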
	
	I0601 11:20:09.361105  276679 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 11:20:09.361162  276679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:20:09.368101  276679 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:20:09.368169  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:20:09.374382  276679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:20:09.386282  276679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:20:09.398188  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:20:09.409736  276679 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:20:09.412458  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.420789  276679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:20:09.420897  276679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:20:09.420940  276679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:20:09.421000  276679 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:20:09.421053  276679 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:20:09.421088  276679 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:20:09.421176  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:20:09.421205  276679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:20:09.421216  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:20:09.421244  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:20:09.421270  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:20:09.421298  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:20:09.421334  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:20:09.421917  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:20:09.438490  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:20:09.454711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:20:09.471469  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:20:09.488271  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:20:09.504375  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:20:09.520473  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:20:09.536663  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:20:09.552725  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:20:09.568724  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:20:09.584711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:20:09.600406  276679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
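
	The run above stages every certificate and key onto the node before the cluster is reconfigured. A minimal sketch of that copy step, shelling out to plain scp; the host address, key path, and file list below are illustrative placeholders, not minikube's internal ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// copyCerts pushes each local cert to its destination path on the node.
	// host, keyPath and the files map are stand-ins for illustration.
	func copyCerts(host, keyPath string, files map[string]string) error {
		for local, remote := range files {
			// scp -i <key> <local> root@<host>:<remote>
			cmd := exec.Command("scp", "-i", keyPath, local, "root@"+host+":"+remote)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("scp %s: %v: %s", local, err, out)
			}
		}
		return nil
	}

	func main() {
		files := map[string]string{
			"ca.crt": "/var/lib/minikube/certs/ca.crt", // example pair
		}
		if err := copyCerts("192.168.49.2", "id_rsa", files); err != nil {
			fmt.Println(err)
		}
	}
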
	I0601 11:20:09.611814  276679 ssh_runner.go:195] Run: openssl version
	I0601 11:20:09.616280  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:20:09.623058  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625881  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625913  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.630367  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:20:09.636712  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:20:09.643407  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646316  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646366  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.650791  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:20:09.657126  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:20:09.663990  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666934  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666966  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.671359  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
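
	The openssl/ln pairs above implement OpenSSL's subject-hash lookup convention: the system trust store resolves a CA by the hash of its subject, so each PEM gets a companion symlink named <hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 here). A small sketch of that convention, assuming openssl is on PATH and the process can write to /etc/ssl/certs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA links a CA cert under its OpenSSL subject hash so tools
	// using the system trust store can find it (the <hash>.0 convention).
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // mimic ln -fs: replace any stale link first
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
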
	I0601 11:20:09.677573  276679 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:09.677668  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:20:09.677695  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:09.700805  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:09.700825  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:09.700835  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:09.700844  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:09.700853  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:09.700863  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:09.700870  276679 cri.go:87] found id: ""
	I0601 11:20:09.700900  276679 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:20:09.711953  276679 cri.go:114] JSON = null
	W0601 11:20:09.711995  276679 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
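
	The warning above comes from cross-checking two views of the runtime: crictl (via the CRI) reports six kube-system containers, while runc's state directory reports none ("JSON = null"), so the unpause step is skipped. A rough sketch of that cross-check, assuming crictl and runc are on PATH; the real logic lives in cri.go and kubeadm.go:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// pausedMismatch reports whether runc sees no container state while
	// crictl still lists kube-system containers, the condition logged above.
	func pausedMismatch() (bool, error) {
		ps, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return false, err
		}
		ids := strings.Fields(string(ps))

		raw, err := exec.Command("runc", "--root", "/run/containerd/runc/k8s.io",
			"list", "-f", "json").Output()
		if err != nil {
			return false, err
		}
		var states []struct {
			Status string `json:"status"`
		}
		_ = json.Unmarshal(raw, &states) // a literal "null" leaves states nil
		return len(states) == 0 && len(ids) > 0, nil
	}

	func main() { fmt.Println(pausedMismatch()) }
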
	I0601 11:20:09.712052  276679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:20:09.718628  276679 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:20:09.718649  276679 kubeadm.go:626] restartCluster start
	I0601 11:20:09.718687  276679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:20:09.724992  276679 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.725567  276679 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601110654-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:20:09.725941  276679 kubeconfig.go:127] "default-k8s-different-port-20220601110654-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:20:09.726552  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
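
	Before attempting a restart, the kubeconfig is verified: if the cluster's context is missing, the file is repaired under a write lock, as the three lines above show. A deliberately crude sketch of the presence check, using a plain substring match rather than the structured parsing the real code performs:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// hasContext is a crude stand-in for the kubeconfig verification above:
	// it only checks that the context name appears somewhere in the file.
	func hasContext(kubeconfigPath, name string) (bool, error) {
		data, err := os.ReadFile(kubeconfigPath)
		if err != nil {
			return false, err
		}
		return strings.Contains(string(data), name), nil
	}

	func main() {
		ok, err := hasContext(os.Getenv("KUBECONFIG"),
			"default-k8s-different-port-20220601110654-6708")
		fmt.Println(ok, err) // false would trigger the "will repair!" path
	}
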
	I0601 11:20:09.727803  276679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:20:09.734151  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.734186  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.741699  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.942065  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.942125  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.950479  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.142775  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.142860  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.151184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.342428  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.342511  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.350942  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.542230  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.542324  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.550731  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.741765  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.741840  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.750184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.942518  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.942589  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.951137  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.142442  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.142519  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.151332  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.342632  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.342693  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.351149  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.542423  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.542483  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.550625  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.741869  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.741945  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.750554  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.942776  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.942855  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.951226  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.142534  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.142617  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.151065  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.342354  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.342429  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.350855  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.542142  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.542207  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.550615  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.741824  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.741894  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.750511  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.750537  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.750569  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.758099  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.758124  276679 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
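
	Each "Checking apiserver status" probe above runs pgrep for the apiserver process and retries on a roughly 200ms cadence; only after the deadline does the code conclude the apiserver is gone and a reconfigure is needed. A sketch of that polling loop (the timeout value here is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverPID polls pgrep until a kube-apiserver process shows up or
	// the deadline passes.
	func apiserverPID(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			time.Sleep(200 * time.Millisecond) // matches the cadence in the log
		}
		return "", fmt.Errorf("timed out waiting for the condition")
	}

	func main() {
		pid, err := apiserverPID(3 * time.Second)
		fmt.Println(pid, err)
	}
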
	I0601 11:20:12.758131  276679 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:20:12.758146  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:20:12.758196  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:12.782896  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:12.782918  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:12.782924  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:12.782931  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:12.782936  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:12.782943  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:12.782948  276679 cri.go:87] found id: ""
	I0601 11:20:12.782955  276679 cri.go:232] Stopping containers: [fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44]
	I0601 11:20:12.782994  276679 ssh_runner.go:195] Run: which crictl
	I0601 11:20:12.785799  276679 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44
	I0601 11:20:12.809504  276679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
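
	To quiesce the node before reconfiguring, every kube-system container found via crictl is stopped and then the kubelet itself is stopped, matching the three commands above. A sketch, assuming crictl and systemctl are reachable via sudo:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// stopKubeSystem lists kube-system container IDs via crictl, stops them,
	// then stops the kubelet so nothing restarts them mid-reconfigure.
	func stopKubeSystem() error {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		if len(ids) > 0 {
			args := append([]string{"crictl", "stop"}, ids...)
			if err := exec.Command("sudo", args...).Run(); err != nil {
				return fmt.Errorf("crictl stop: %v", err)
			}
		}
		return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
	}

	func main() { fmt.Println(stopKubeSystem()) }
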
	I0601 11:20:12.819061  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:20:12.825913  276679 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:07 /etc/kubernetes/scheduler.conf
	
	I0601 11:20:12.825968  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 11:20:10.219173  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:12.219371  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:13.243691  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:15.243798  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:12.832916  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 11:20:12.839178  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.845567  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.845605  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.851603  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 11:20:12.857919  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.857967  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 11:20:12.864112  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870523  276679 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870540  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:12.912381  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.433508  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.566844  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.617762  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
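
	Rather than a full kubeadm init, the restart path replays only the individual init phases, in the order shown above, against the generated config. A sketch of that sequence; binDir and cfg mirror the log's paths but are parameters here:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runPhases replays the kubeadm init phases used by the restart path.
	func runPhases(binDir, cfg string) error {
		phases := []string{
			"certs all", "kubeconfig all", "kubelet-start",
			"control-plane all", "etcd local",
		}
		for _, p := range phases {
			args := append([]string{"env", "PATH=" + binDir + ":/usr/bin",
				"kubeadm", "init", "phase"}, strings.Fields(p)...)
			args = append(args, "--config", cfg)
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("phase %q: %v: %s", p, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(runPhases("/var/lib/minikube/binaries/v1.23.6",
			"/var/tmp/minikube/kubeadm.yaml"))
	}
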
	I0601 11:20:13.686212  276679 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:20:13.686269  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.195273  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.695296  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.195457  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.695544  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.195542  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.695465  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.195333  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.694666  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.719337  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.218953  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.742741  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:20.244002  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:18.194692  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:18.694918  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.195623  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.695137  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.758656  276679 api_server.go:71] duration metric: took 6.072444993s to wait for apiserver process to appear ...
	I0601 11:20:19.758687  276679 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:20:19.758700  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.369047  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:20:22.369078  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:20:19.718920  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:21.719314  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:23.719804  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:22.869917  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.874561  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:22.874589  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.370203  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.375048  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:23.375073  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.869242  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.874012  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0601 11:20:23.879941  276679 api_server.go:140] control plane version: v1.23.6
	I0601 11:20:23.879963  276679 api_server.go:130] duration metric: took 4.121269797s to wait for apiserver health ...
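
	The 403 -> 500 -> 200 progression above is the normal startup sequence: /healthz rejects anonymous requests until the RBAC bootstrap roles exist, then reports failing post-start hooks until they finish. A sketch of a healthz wait that tolerates those intermediate answers; InsecureSkipVerify stands in for pinning the cluster CA and is for illustration only:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls /healthz until it returns 200 or the deadline passes;
	// 403 and 500 responses are expected while the apiserver boots.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
			},
			Timeout: 2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy")
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.49.2:8444/healthz", 30*time.Second))
	}
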
	I0601 11:20:23.879972  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:23.879977  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:23.882052  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:20:22.743507  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:25.242700  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:23.883460  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:20:23.886921  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:20:23.886945  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:20:23.899955  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
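
	With the docker driver and containerd there is no default CNI, so a kindnet manifest is written to the node and applied with the cluster's own kubectl, as the scp and apply steps above show. A sketch of that pair; paths are copied from the log and the manifest contents are elided:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyCNI writes a CNI manifest and applies it with kubectl, mirroring
	// the scp + kubectl apply pair in the log.
	func applyCNI(manifest []byte, kubectl, kubeconfig string) error {
		const path = "/var/tmp/minikube/cni.yaml"
		if err := os.WriteFile(path, manifest, 0o644); err != nil {
			return err
		}
		out, err := exec.Command("sudo", kubectl, "apply",
			"--kubeconfig="+kubeconfig, "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(applyCNI([]byte("# manifest elided\n"),
			"/var/lib/minikube/binaries/v1.23.6/kubectl", "/var/lib/minikube/kubeconfig"))
	}
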
	I0601 11:20:24.544438  276679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:20:24.550979  276679 system_pods.go:59] 9 kube-system pods found
	I0601 11:20:24.551015  276679 system_pods.go:61] "coredns-64897985d-9gcj2" [28e98fca-a88b-422d-9f4b-797b18a8ff7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551025  276679 system_pods.go:61] "etcd-default-k8s-different-port-20220601110654-6708" [3005e651-1349-4d5e-b06f-e0fac3064ccf] Running
	I0601 11:20:24.551035  276679 system_pods.go:61] "kindnet-7fspq" [eefcd8e6-51e4-4d48-a420-93f4b47cf732] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:20:24.551042  276679 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601110654-6708" [974fafdd-9176-4d97-acd7-9874d63b4987] Running
	I0601 11:20:24.551053  276679 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601110654-6708" [38b2c1a1-9a1a-4a1f-9fac-904e47d545be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:20:24.551066  276679 system_pods.go:61] "kube-proxy-slzcl" [a1a6237f-6142-4e31-8bd4-72afd4f8a4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:20:24.551083  276679 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601110654-6708" [42ce6176-36e5-46bc-a443-19e4ca958785] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:20:24.551092  276679 system_pods.go:61] "metrics-server-b955d9d8-2k9wk" [fbc457b5-c359-4b84-abe5-d488874181f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551102  276679 system_pods.go:61] "storage-provisioner" [48086474-3417-47ff-970d-f7cf7806983b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551112  276679 system_pods.go:74] duration metric: took 6.652373ms to wait for pod list to return data ...
	I0601 11:20:24.551126  276679 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:20:24.553819  276679 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:20:24.553843  276679 node_conditions.go:123] node cpu capacity is 8
	I0601 11:20:24.553854  276679 node_conditions.go:105] duration metric: took 2.721044ms to run NodePressure ...
	I0601 11:20:24.553869  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:24.680194  276679 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683686  276679 kubeadm.go:777] kubelet initialised
	I0601 11:20:24.683708  276679 kubeadm.go:778] duration metric: took 3.487172ms waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683715  276679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:24.689167  276679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	I0601 11:20:26.694484  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:26.219205  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:28.219317  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:27.243486  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:29.742717  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:31.742800  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:28.695017  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.695110  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:32.695566  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.219646  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:32.719074  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:34.242643  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:36.243891  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.195305  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:37.197596  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.219473  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:37.719336  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:38.243963  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.743349  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:39.695270  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:42.195160  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.218932  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.719276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.743398  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.243686  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:44.694661  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:46.695274  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.219350  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.719698  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.742813  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.244047  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:48.696514  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:51.195247  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.218967  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.219422  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.743394  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.743515  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:53.694370  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:55.694640  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:57.695171  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.719514  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.219033  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.242819  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.243739  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.739945  270029 pod_ready.go:81] duration metric: took 4m0.002166585s waiting for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
	E0601 11:20:59.739968  270029 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:20:59.739995  270029 pod_ready.go:38] duration metric: took 4m0.008917217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:59.740018  270029 kubeadm.go:630] restartCluster took 4m15.707393707s
	W0601 11:20:59.740131  270029 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
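
	When the four-minute readiness wait expires, restartCluster gives up and the code falls back to a full reset followed by a fresh init, which is exactly what the next lines do. A sketch of that control flow with the three steps stubbed out:

	package main

	import (
		"fmt"
		"time"
	)

	// restartOrReset tries the gentle restart path first; on timeout it
	// falls back to reset+init, as the log's "will reset it" branch does.
	func restartOrReset(waitReady func(time.Duration) error,
		reset, initCluster func() error) error {
		if err := waitReady(4 * time.Minute); err == nil {
			return nil // restart succeeded, nothing to rebuild
		}
		fmt.Println("! Unable to restart cluster, will reset it")
		if err := reset(); err != nil {
			return err
		}
		return initCluster()
	}

	func main() {
		err := restartOrReset(
			func(time.Duration) error { return fmt.Errorf("timed out") }, // stub
			func() error { return nil }, // kubeadm reset --force
			func() error { return nil }, // kubeadm init --config ...
		)
		fmt.Println(err)
	}
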
	I0601 11:20:59.740156  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:21:01.430762  270029 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.690579833s)
	I0601 11:21:01.430838  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:01.440364  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:21:01.447145  270029 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:21:01.447194  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:21:01.453852  270029 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:21:01.453891  270029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:21:01.701224  270029 out.go:204]   - Generating certificates and keys ...
	I0601 11:21:00.194872  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:02.195437  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.219067  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:01.219719  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:03.719181  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:02.294583  270029 out.go:204]   - Booting up control plane ...
	I0601 11:21:04.694423  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:06.695087  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:05.719516  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:07.719966  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:09.195174  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:11.694583  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:10.218984  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:12.219075  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:14.337355  270029 out.go:204]   - Configuring RBAC rules ...
	I0601 11:21:14.750718  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:21:14.750741  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:21:14.752905  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:21:14.754285  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:21:14.758047  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:21:14.758065  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:21:14.771201  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:21:15.434277  270029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:21:15.434380  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.434381  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601110327-6708 minikube.k8s.io/updated_at=2022_06_01T11_21_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489119  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489208  270029 ops.go:34] apiserver oom_adj: -16
	I0601 11:21:16.079192  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:16.579319  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:14.194681  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:16.694557  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:14.219440  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:16.719363  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:17.079349  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:17.579548  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.079683  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.579186  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.079819  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.579346  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.079183  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.579984  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.079335  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.579766  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.694796  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:21.194627  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:19.218867  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:21.219185  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:23.719814  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:22.079321  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:22.579993  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.079856  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.579743  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.079256  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.579276  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.079828  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.579763  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.080068  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.579388  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.694527  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:25.694996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:27.079269  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.579729  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.636171  270029 kubeadm.go:1045] duration metric: took 12.201851278s to wait for elevateKubeSystemPrivileges.
	I0601 11:21:27.636205  270029 kubeadm.go:397] StartCluster complete in 4m43.646757592s
	I0601 11:21:27.636227  270029 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:27.636334  270029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:21:27.637880  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:28.157076  270029 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601110327-6708" rescaled to 1
	I0601 11:21:28.157150  270029 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:21:28.157180  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:21:28.159818  270029 out.go:177] * Verifying Kubernetes components...
	I0601 11:21:28.157185  270029 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:21:28.157406  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:21:28.161484  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:28.161496  270029 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161511  270029 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161523  270029 addons.go:165] addon metrics-server should already be in state true
	I0601 11:21:28.161537  270029 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161566  270029 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601110327-6708"
	I0601 11:21:28.161573  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	W0601 11:21:28.161579  270029 addons.go:165] addon dashboard should already be in state true
	I0601 11:21:28.161483  270029 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161622  270029 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161631  270029 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:21:28.161636  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161669  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161500  270029 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161709  270029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601110327-6708"
	I0601 11:21:28.161949  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162094  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162123  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162229  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.209663  270029 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.211523  270029 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:21:28.213009  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:21:28.213030  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:21:28.213079  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.216922  270029 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.218989  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:21:28.217201  270029 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.219035  270029 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:21:28.219075  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.219579  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.219012  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:21:28.219781  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.236451  270029 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:21:26.218905  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.219209  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.238138  270029 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.238163  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:21:28.238217  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.246850  270029 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:21:28.246885  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
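	For reference, the sed expression in the command above splices a hosts block into the CoreDNS Corefile immediately before its forward stanza, so that host.minikube.internal resolves to the host gateway IP (192.168.76.1 on this cluster network). Reconstructed from the command itself rather than from captured cluster state, the patched Corefile fragment would read roughly:
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf ...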
	I0601 11:21:28.273680  270029 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.273707  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:21:28.273761  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.278846  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.279320  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.286384  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.321729  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.455756  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:21:28.455785  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:21:28.466348  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.469026  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:21:28.469067  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:21:28.469486  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.478076  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:21:28.478099  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:21:28.487008  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:21:28.487036  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:21:28.573106  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:21:28.573135  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:21:28.574698  270029 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0601 11:21:28.577019  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.577042  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:21:28.653936  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:21:28.653967  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:21:28.658482  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.671762  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:21:28.671808  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:21:28.758424  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:21:28.758516  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:21:28.776703  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:21:28.776735  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:21:28.794636  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:21:28.794670  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:21:28.959418  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:28.959449  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:21:28.976465  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:29.354605  270029 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601110327-6708"
	I0601 11:21:29.699561  270029 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:21:29.700807  270029 addons.go:417] enableAddons completed in 1.543631535s
	[... 11:21:28 to 11:21:47: the same three status polls repeat, identical apart from timestamps, at roughly 2 to 2.5 second intervals per process: 276679 pod_ready.go:102 (pod "coredns-64897985d-9gcj2" still Pending, Unschedulable), 254820 node_ready.go:58 (node "old-k8s-version-20220601105850-6708" has status "Ready":"False"), and 270029 node_ready.go:58 (node "embed-certs-20220601110327-6708" has status "Ready":"False") ...]
	I0601 11:21:47.721505  254820 node_ready.go:38] duration metric: took 4m0.008123732s waiting for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:21:47.723918  254820 out.go:177] 
	W0601 11:21:47.725406  254820 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:21:47.725423  254820 out.go:239] * 
	W0601 11:21:47.726098  254820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:21:47.728001  254820 out.go:177] 
	[... 11:21:44 to 11:23:01: with process 254820 exited, only the two remaining status polls repeat, identical apart from timestamps, at roughly 2 to 2.5 second intervals per process: 276679 pod_ready.go:102 (pod "coredns-64897985d-9gcj2" still Pending, Unschedulable) through 11:22:58, and 270029 node_ready.go:58 (node "embed-certs-20220601110327-6708" has status "Ready":"False") through 11:23:01 ...]
	I0601 11:23:00.694762  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:03.260561  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:05.260780  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:03.195176  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:05.694698  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.695208  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.760513  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.260326  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.195039  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.695240  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.260674  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:14.260918  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:16.760064  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:15.195155  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:17.195241  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:18.760686  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:21.260676  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:19.694620  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:21.694667  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:23.760024  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:26.259746  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:24.194510  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:26.194546  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:28.260714  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:30.760541  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:28.194917  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:30.694766  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:33.260035  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:35.261060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:33.195328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:35.694682  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.695340  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.760144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.260334  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.194751  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.194853  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.759808  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:46.760285  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.695010  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:46.695526  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:48.760374  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:51.260999  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:49.194307  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:51.195053  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:53.760587  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:56.260172  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:53.195339  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:55.695153  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:58.759799  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:00.760631  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:58.194738  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:00.195407  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:02.695048  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:03.260687  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:05.260722  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:04.695337  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.194665  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.760567  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:10.260596  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:09.195069  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:11.694328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:12.260967  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.759793  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:16.760292  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.194996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:16.694542  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:18.760531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:20.760689  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:18.694668  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:20.695051  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:23.195952  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:24.691928  276679 pod_ready.go:81] duration metric: took 4m0.002724634s waiting for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	E0601 11:24:24.691955  276679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:24:24.691981  276679 pod_ready.go:38] duration metric: took 4m0.008258762s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:24:24.692005  276679 kubeadm.go:630] restartCluster took 4m14.973349857s
	W0601 11:24:24.692130  276679 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
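Two clusters are being polled in parallel in the lines above: process 270029 waits for node "embed-certs-20220601110327-6708" to turn Ready, while process 276679 waits on the coredns pod, which stays Pending because the node still carries the node.kubernetes.io/not-ready taint. Once the 4m0s budget expires, minikube abandons the restart path and falls back to a full kubeadm reset/init, as the following lines show. A minimal diagnostic sketch for this state, assuming kubectl is pointed at the affected cluster (names copied from the log above):

	# Show the taint behind "0/1 nodes are available: 1 node(s) had taint ..."
	kubectl describe node | grep -A 2 'Taints:'
	# Show the same PodScheduled condition that pod_ready.go keeps dumping
	kubectl -n kube-system get pod coredns-64897985d-9gcj2 \
	    -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")].message}'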
	I0601 11:24:24.692159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:24:26.286416  276679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.594228976s)
	I0601 11:24:26.286489  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:26.296314  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:24:26.303059  276679 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:24:26.303116  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:24:26.309917  276679 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:24:26.309957  276679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
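The failed `ls -la` probe above is the expected outcome rather than a new error: the kubeadm reset at 11:24:24 already deleted admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, so the stale-config cleanup is skipped and a fresh kubeadm init starts, with SystemVerification ignored for the docker driver as noted above. A hedged way to reproduce that check by hand:

	# With no matching files the glob stays literal and ls exits with status 2,
	# which is what the config check above interprets as "nothing to clean"
	sudo ls -la /etc/kubernetes/*.conf; echo "exit status: $?"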
	I0601 11:24:22.761011  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:25.261206  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:26.556270  276679 out.go:204]   - Generating certificates and keys ...
	I0601 11:24:27.302083  276679 out.go:204]   - Booting up control plane ...
	I0601 11:24:27.261441  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:29.759885  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:32.260145  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:34.260990  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:36.760710  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:38.840585  276679 out.go:204]   - Configuring RBAC rules ...
	I0601 11:24:39.253770  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:24:39.253791  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:24:39.255739  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:24:39.259837  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:41.260124  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:39.257207  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:24:39.261207  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:24:39.261228  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:24:39.273744  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
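With the docker driver plus the containerd runtime, the CNI manager selects kindnet, copies the 2429-byte manifest to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl. Two hedged spot checks, assuming the DaemonSet keeps the usual kindnet name and that containerd reads its default CNI config directory:

	sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get daemonset kindnet
	# The kindnet pod writes its conflist here once it starts
	sudo ls /etc/cni/net.d/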
	I0601 11:24:39.861493  276679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:24:39.861573  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.861574  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_24_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.914842  276679 ops.go:34] apiserver oom_adj: -16
	I0601 11:24:39.914913  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.498901  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.998931  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.499031  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.998593  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:42.499160  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.260473  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:45.760870  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:42.998966  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.498638  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.998319  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.498531  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.998678  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.499193  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.998418  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.498985  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.998941  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:47.498945  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.260450  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:50.260933  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:47.999272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.498439  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.999292  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.499272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.998339  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.498332  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.999106  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.499296  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.998980  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.498623  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.998371  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.498515  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.594790  276679 kubeadm.go:1045] duration metric: took 13.733266896s to wait for elevateKubeSystemPrivileges.
	I0601 11:24:53.594820  276679 kubeadm.go:397] StartCluster complete in 4m43.917251881s
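The burst of `kubectl get sa default` calls between 11:24:39 and 11:24:53 is a readiness poll rather than a fault: kubeadm init returns before the ServiceAccount controller has created the default SA, and the minikube-rbac ClusterRoleBinding created at 11:24:39 is only useful once it exists. The 13.733s metric above is simply how long that poll ran. Roughly the same wait as a shell loop (a sketch using the same bundled kubectl):

	until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # the log shows retries roughly every 500ms
	done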
	I0601 11:24:53.594841  276679 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:53.594938  276679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:24:53.596907  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:54.111475  276679 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:24:54.111547  276679 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
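At 11:24:54 the coredns deployment is rescaled to a single replica; kubeadm's default of two adds nothing on a one-node cluster. The kapi call is roughly equivalent to this sketch:

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system scale deployment coredns --replicas=1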
	I0601 11:24:54.113711  276679 out.go:177] * Verifying Kubernetes components...
	I0601 11:24:54.111604  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:24:54.111644  276679 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:24:54.111802  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:24:54.115020  276679 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115035  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:54.115035  276679 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115048  276679 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115055  276679 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115057  276679 addons.go:165] addon storage-provisioner should already be in state true
	W0601 11:24:54.115064  276679 addons.go:165] addon metrics-server should already be in state true
	I0601 11:24:54.115034  276679 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115103  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115109  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115112  276679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115037  276679 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115134  276679 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115144  276679 addons.go:165] addon dashboard should already be in state true
	I0601 11:24:54.115176  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115416  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115596  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115611  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115615  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.129176  276679 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:24:54.168194  276679 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:24:54.169714  276679 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.171144  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:24:54.170891  276679 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.171181  276679 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:24:54.171211  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.171167  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:24:54.171329  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.171684  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.176157  276679 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.177770  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:24:54.177796  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:24:54.179131  276679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:24:54.177859  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.180787  276679 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.180809  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:24:54.180855  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.233206  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.240234  276679 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.240263  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:24:54.240311  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.240743  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.242497  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.255476  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:24:54.289597  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.510589  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.510747  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:24:54.510770  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:24:54.556919  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:24:54.556950  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:24:54.566012  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:24:54.566042  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:24:54.569528  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.576575  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:24:54.576625  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:24:54.654525  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:24:54.654551  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:24:54.655296  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.655319  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:24:54.661290  276679 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
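The pipeline started at 11:24:54.255476 rewrites the coredns ConfigMap in place: sed splices a hosts block ahead of the `forward . /etc/resolv.conf` line and `kubectl replace` pushes the result back, which is what this "host record injected" line confirms. Reconstructed from that sed expression, the injected Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}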
	I0601 11:24:54.671592  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:24:54.671621  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:24:54.673696  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.687107  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:24:54.687133  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:24:54.768961  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:24:54.768989  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:24:54.854363  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:24:54.854399  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:24:54.870735  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:24:54.870762  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:24:54.888031  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:54.888063  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:24:54.967082  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:55.273650  276679 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:55.661065  276679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
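All four addon manifest batches apply cleanly; only metrics-server gets an extra verification pass, and the `fake.domain/k8s.gcr.io/echoserver:1.4` image above suggests this test intentionally points metrics-server at an unreachable registry. Hedged spot checks (namespaces assumed: the dashboard manifests create kubernetes-dashboard, the rest live in kube-system):

	kubectl -n kubernetes-dashboard get deploy,svc
	kubectl -n kube-system get deploy metrics-server
	kubectl get storageclass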
	I0601 11:24:52.261071  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:54.261578  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:56.760078  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:55.662561  276679 addons.go:417] enableAddons completed in 1.550935677s
	I0601 11:24:56.136034  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:24:58.760245  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:00.760344  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:58.136131  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:00.136759  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:02.636409  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:03.260144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.260531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.136779  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.635969  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.760027  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:09.760904  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:10.136336  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.636564  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.260100  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.759992  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:16.760260  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.636694  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:17.137058  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:19.260136  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:21.260700  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:19.636331  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:22.136010  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:23.760875  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:26.261082  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:24.136501  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:26.636646  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:28.263320  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:28.263343  270029 node_ready.go:38] duration metric: took 4m0.016466534s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:25:28.265930  270029 out.go:177] 
	W0601 11:25:28.267524  270029 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:25:28.267549  270029 out.go:239] * 
	W0601 11:25:28.268404  270029 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:25:28.269962  270029 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	b948d023f8980       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   5034272feeb28
	847a11a10e8fe       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   5034272feeb28
	2024cc29941ea       4c03754524064       4 minutes ago        Running             kube-proxy                0                   daf24f5fe6815
	66ae64154eec2       595f327f224a4       4 minutes ago        Running             kube-scheduler            2                   2407fda9d1316
	6a41e96934391       25f8c7f3da61c       4 minutes ago        Running             etcd                      2                   99351c41f0535
	886985a42629e       8fa62c12256df       4 minutes ago        Running             kube-apiserver            2                   0116dd4e67c47
	419ab1e52af79       df7b72818ad2e       4 minutes ago        Running             kube-controller-manager   2                   2380a5b9d67cf
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:16:28 UTC, end at Wed 2022-06-01 11:25:29 UTC. --
	Jun 01 11:21:27 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:27.739187944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 11:21:27 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:27.739197514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 11:21:27 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:27.739381346Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/daf24f5fe68158c3863dbf0f3f9c3dc4abcc687d43e39ef17ead8f33d9c8cea4 pid=3291 runtime=io.containerd.runc.v2
	Jun 01 11:21:27 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:27.799181940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tssbf,Uid:d9b1d343-b5b6-4222-860a-0c82565d26d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"daf24f5fe68158c3863dbf0f3f9c3dc4abcc687d43e39ef17ead8f33d9c8cea4\""
	Jun 01 11:21:27 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:27.801847878Z" level=info msg="CreateContainer within sandbox \"daf24f5fe68158c3863dbf0f3f9c3dc4abcc687d43e39ef17ead8f33d9c8cea4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jun 01 11:21:27 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:27.814026788Z" level=info msg="CreateContainer within sandbox \"daf24f5fe68158c3863dbf0f3f9c3dc4abcc687d43e39ef17ead8f33d9c8cea4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2024cc29941eac56c99e05d765da4cccd7a64faa03b756a89fe50b23fa6e8a56\""
	Jun 01 11:21:27 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:27.814475789Z" level=info msg="StartContainer for \"2024cc29941eac56c99e05d765da4cccd7a64faa03b756a89fe50b23fa6e8a56\""
	Jun 01 11:21:27 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:27.873113112Z" level=info msg="StartContainer for \"2024cc29941eac56c99e05d765da4cccd7a64faa03b756a89fe50b23fa6e8a56\" returns successfully"
	Jun 01 11:21:28 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:28.054853106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-xnhg5,Uid:77485095-9b6b-4682-b7c7-f5a313137d9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\""
	Jun 01 11:21:28 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:28.057614125Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jun 01 11:21:28 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:28.069042263Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"847a11a10e8fea029dd23fac48e064c61a972e94ec5d262ecd609d1320b886cc\""
	Jun 01 11:21:28 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:28.069489362Z" level=info msg="StartContainer for \"847a11a10e8fea029dd23fac48e064c61a972e94ec5d262ecd609d1320b886cc\""
	Jun 01 11:21:28 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:21:28.282016609Z" level=info msg="StartContainer for \"847a11a10e8fea029dd23fac48e064c61a972e94ec5d262ecd609d1320b886cc\" returns successfully"
	Jun 01 11:22:19 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:22:19.629149173Z" level=error msg="ContainerStatus for \"aa08679c8c54bd34a1427c3bd8b9b2eee01105b938529d57670f44ac981227c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa08679c8c54bd34a1427c3bd8b9b2eee01105b938529d57670f44ac981227c2\": not found"
	Jun 01 11:22:19 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:22:19.629882349Z" level=error msg="ContainerStatus for \"f86a1f4ab8fa6e59950a41f46d01e54d995858f89a659fa6afeb87be1d48a1dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f86a1f4ab8fa6e59950a41f46d01e54d995858f89a659fa6afeb87be1d48a1dd\": not found"
	Jun 01 11:22:19 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:22:19.630423560Z" level=error msg="ContainerStatus for \"4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a8be0c7cfc5357e854ba7e1f8d7f90fdbacf08e294eb70b849c4501eb4b32b6\": not found"
	Jun 01 11:22:19 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:22:19.630960733Z" level=error msg="ContainerStatus for \"a502c0c61144997404cc8698dac19697f8277fa6b9f390fe4fffa2f075def355\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a502c0c61144997404cc8698dac19697f8277fa6b9f390fe4fffa2f075def355\": not found"
	Jun 01 11:24:08 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:24:08.585009331Z" level=info msg="shim disconnected" id=847a11a10e8fea029dd23fac48e064c61a972e94ec5d262ecd609d1320b886cc
	Jun 01 11:24:08 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:24:08.585075078Z" level=warning msg="cleaning up after shim disconnected" id=847a11a10e8fea029dd23fac48e064c61a972e94ec5d262ecd609d1320b886cc namespace=k8s.io
	Jun 01 11:24:08 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:24:08.585094616Z" level=info msg="cleaning up dead shim"
	Jun 01 11:24:08 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:24:08.594103599Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:24:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3768 runtime=io.containerd.runc.v2\n"
	Jun 01 11:24:09 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:24:09.070545919Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jun 01 11:24:09 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:24:09.081480880Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"b948d023f8980c480b53fdabb3e706e5a9dd36d3ec3306923b50ef0f4f9e9a40\""
	Jun 01 11:24:09 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:24:09.082560011Z" level=info msg="StartContainer for \"b948d023f8980c480b53fdabb3e706e5a9dd36d3ec3306923b50ef0f4f9e9a40\""
	Jun 01 11:24:09 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:24:09.257544964Z" level=info msg="StartContainer for \"b948d023f8980c480b53fdabb3e706e5a9dd36d3ec3306923b50ef0f4f9e9a40\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220601110327-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220601110327-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=embed-certs-20220601110327-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_21_15_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:21:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220601110327-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:25:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:21:26 +0000   Wed, 01 Jun 2022 11:21:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:21:26 +0000   Wed, 01 Jun 2022 11:21:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:21:26 +0000   Wed, 01 Jun 2022 11:21:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:21:26 +0000   Wed, 01 Jun 2022 11:21:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220601110327-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                d600b159-ea34-4ea3-ab62-e86c595f06ef
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220601110327-6708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-xnhg5                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-embed-certs-20220601110327-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-embed-certs-20220601110327-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-tssbf                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-embed-certs-20220601110327-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m1s   kube-proxy  
	  Normal  Starting                 4m10s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s  kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [6a41e969343918f9600ad1703d19a3220b0d2c2fb0c45c8588a9d65792ba9163] <==
	* {"level":"info","ts":"2022-06-01T11:21:08.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-06-01T11:21:08.675Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:21:09.167Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220601110327-6708 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:21:09.169Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:21:09.169Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> kernel <==
	*  11:25:29 up  1:08,  0 users,  load average: 0.51, 1.39, 1.81
	Linux embed-certs-20220601110327-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [886985a42629e2f2581e6d58eba9be2a3e9a0976e634d67f6342d3695a07e331] <==
	* E0601 11:21:14.466400       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0601 11:21:14.467554       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0601 11:21:14.468782       1 timeout.go:141] post-timeout activity - time-elapsed: 3.573346ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
	I0601 11:21:14.542614       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:21:14.549331       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:21:14.558556       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:21:19.670838       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:21:27.387824       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:21:27.437896       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:21:27.932468       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:21:29.294541       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.105.182.95]
	I0601 11:21:29.683180       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.180.180]
	I0601 11:21:29.691708       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.101.21.27]
	W0601 11:21:30.176086       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:21:30.176180       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:21:30.176194       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:22:30.176579       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:22:30.176634       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:22:30.176640       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:24:30.176839       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:24:30.176936       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:24:30.176959       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [419ab1e52af79f5d31cce5a9b20223a30a371546b4870858a9ea585daadb8873] <==
	* E0601 11:21:29.486396       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:21:29.494720       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:21:29.494783       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:21:29.561875       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:21:29.561942       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:21:29.567352       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:21:29.567355       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:21:29.573727       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-q4zvb"
	I0601 11:21:29.660694       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-xlrsl"
	E0601 11:21:56.856882       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:21:57.275577       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:22:26.875584       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:22:27.288937       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:22:56.894205       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:22:57.301550       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:23:26.910332       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:23:27.315548       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:23:56.927018       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:23:57.330827       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:24:26.941671       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:24:27.348795       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:24:56.955676       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:24:57.363603       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:25:26.968560       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:25:27.380711       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [2024cc29941eac56c99e05d765da4cccd7a64faa03b756a89fe50b23fa6e8a56] <==
	* I0601 11:21:27.908559       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0601 11:21:27.908665       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0601 11:21:27.908723       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:21:27.929530       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:21:27.929554       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:21:27.929561       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:21:27.929580       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:21:27.929998       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:21:27.930587       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:21:27.930621       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:21:27.930645       1 config.go:317] "Starting service config controller"
	I0601 11:21:27.930649       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:21:28.031260       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:21:28.031267       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [66ae64154eec2d7b3c29d2dfeddf5ba2852497cdce5a0c800571ccb6a8d41a89] <==
	* W0601 11:21:11.758158       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:21:11.758176       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:21:11.758158       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:11.758197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:11.758221       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:21:11.758233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:21:11.758422       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:11.758446       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:11.758510       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:21:11.758527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:21:11.758633       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:11.758649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:12.596556       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:21:12.596630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:21:12.605956       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:12.606023       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:12.628052       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:12.628092       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:12.632884       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:21:12.632924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:21:12.665044       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:12.665106       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:12.777999       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:21:12.778030       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0601 11:21:13.283228       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:16:28 UTC, end at Wed 2022-06-01 11:25:29 UTC. --
	Jun 01 11:23:29 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:23:29.895720    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:23:34 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:23:34.897278    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:23:39 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:23:39.898885    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:23:44 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:23:44.899594    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:23:49 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:23:49.900716    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:23:54 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:23:54.901949    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:23:59 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:23:59.902890    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:04 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:04.904124    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:09 embed-certs-20220601110327-6708 kubelet[2883]: I0601 11:24:09.068322    2883 scope.go:110] "RemoveContainer" containerID="847a11a10e8fea029dd23fac48e064c61a972e94ec5d262ecd609d1320b886cc"
	Jun 01 11:24:09 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:09.905154    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:14 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:14.906055    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:19 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:19.907427    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:24 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:24.908241    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:29 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:29.909616    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:34 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:34.910688    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:39 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:39.911601    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:44 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:44.913251    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:49 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:49.914911    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:54 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:54.916206    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:24:59 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:24:59.917744    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:25:04 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:25:04.919039    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:25:09 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:25:09.920405    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:25:14 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:25:14.921771    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:25:19 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:25:19.923404    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:25:24 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:25:24.924520    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-jsmdw metrics-server-b955d9d8-rw5ds storage-provisioner dashboard-metrics-scraper-56974995fc-xlrsl kubernetes-dashboard-8469778f77-q4zvb
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 describe pod coredns-64897985d-jsmdw metrics-server-b955d9d8-rw5ds storage-provisioner dashboard-metrics-scraper-56974995fc-xlrsl kubernetes-dashboard-8469778f77-q4zvb
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220601110327-6708 describe pod coredns-64897985d-jsmdw metrics-server-b955d9d8-rw5ds storage-provisioner dashboard-metrics-scraper-56974995fc-xlrsl kubernetes-dashboard-8469778f77-q4zvb: exit status 1 (59.426129ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-jsmdw" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-rw5ds" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-xlrsl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-q4zvb" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220601110327-6708 describe pod coredns-64897985d-jsmdw metrics-server-b955d9d8-rw5ds storage-provisioner dashboard-metrics-scraper-56974995fc-xlrsl kubernetes-dashboard-8469778f77-q4zvb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (543.25s)
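
Note on the failure mode, taken from the log above: the embed-certs node stayed in Ready=False for the entire 4m0s wait because the kubelet kept reporting "cni plugin not initialized", and the kindnet-cni container had exited and was only restarted about a minute before the timeout, after which minikube gave up with GUEST_START. A minimal sketch of how one might re-check the same readiness condition by hand follows; the kubectl context name comes from the profile above, while the app=kindnet label is an assumption based on the usual kindnet manifest, not something this report confirms.

	# Sketch: inspect the node's Ready condition and the CNI pod state by hand.
	# Assumes the profile's kubeconfig context is available on this machine.
	kubectl --context embed-certs-20220601110327-6708 get nodes -o wide
	# app=kindnet is an assumed label from the stock kindnet manifest; adjust if it differs.
	kubectl --context embed-certs-20220601110327-6708 -n kube-system get pods -l app=kindnet
	# Mirrors the condition minikube timed out on: node Ready within the wait window.
	kubectl --context embed-certs-20220601110327-6708 wait node/embed-certs-20220601110327-6708 --for=condition=Ready --timeout=6m0s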

x
+
TestStartStop/group/default-k8s-different-port/serial/SecondStart (543.35s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220601110654-6708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:19:57.950100    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:20:31.163910    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
E0601 11:20:40.380391    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:20:45.080338    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:21:21.551769    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220601110654-6708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (9m1.382027096s)

-- stdout --
	* [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-different-port-20220601110654-6708" ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image kubernetesui/dashboard:v2.5.1
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0601 11:19:52.827023  276679 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:19:52.827225  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827237  276679 out.go:309] Setting ErrFile to fd 2...
	I0601 11:19:52.827242  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827359  276679 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:19:52.827588  276679 out.go:303] Setting JSON to false
	I0601 11:19:52.828890  276679 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3747,"bootTime":1654078646,"procs":456,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:19:52.828955  276679 start.go:125] virtualization: kvm guest
	I0601 11:19:52.831944  276679 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:19:52.833439  276679 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:52.833372  276679 notify.go:193] Checking for updates...
	I0601 11:19:52.835007  276679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:52.836578  276679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:19:52.837966  276679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:19:52.839440  276679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:19:52.841215  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:52.841578  276679 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:52.880823  276679 docker.go:137] docker version: linux-20.10.16
	I0601 11:19:52.880897  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:52.978177  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:52.908721136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:52.978275  276679 docker.go:254] overlay module found
	I0601 11:19:52.981078  276679 out.go:177] * Using the docker driver based on existing profile
	I0601 11:19:52.982316  276679 start.go:284] selected driver: docker
	I0601 11:19:52.982326  276679 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-
20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true
system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:52.982412  276679 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:52.983242  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:53.085320  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:53.012439643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:53.085561  276679 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:19:53.085581  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:19:53.085589  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:19:53.085608  276679 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:53.088575  276679 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.089964  276679 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:19:53.091501  276679 out.go:177] * Pulling base image ...
	I0601 11:19:53.092800  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:19:53.092839  276679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:19:53.092856  276679 cache.go:57] Caching tarball of preloaded images
	I0601 11:19:53.092897  276679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:19:53.093061  276679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:19:53.093076  276679 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:19:53.093182  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.136384  276679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:19:53.136410  276679 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:19:53.136424  276679 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:19:53.136454  276679 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:19:53.136550  276679 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 69.025µs
	I0601 11:19:53.136570  276679 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:19:53.136577  276679 fix.go:55] fixHost starting: 
	I0601 11:19:53.137208  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.168642  276679 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601110654-6708: state=Stopped err=<nil>
	W0601 11:19:53.168681  276679 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:19:53.170972  276679 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601110654-6708" ...
	I0601 11:19:53.172500  276679 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.580842  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.615796  276679 kic.go:416] container "default-k8s-different-port-20220601110654-6708" state is running.
	I0601 11:19:53.616193  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.647308  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.647503  276679 machine.go:88] provisioning docker machine ...
	I0601 11:19:53.647526  276679 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:19:53.647560  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.679842  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:53.680106  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:53.680131  276679 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:19:53.680742  276679 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55946->127.0.0.1:49442: read: connection reset by peer
	I0601 11:19:56.807880  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:19:56.807951  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.839321  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:56.839475  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:56.839510  276679 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:19:56.951445  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:19:56.951473  276679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:19:56.951491  276679 ubuntu.go:177] setting up certificates
	I0601 11:19:56.951499  276679 provision.go:83] configureAuth start
	I0601 11:19:56.951539  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.982392  276679 provision.go:138] copyHostCerts
	I0601 11:19:56.982451  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:19:56.982464  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:19:56.982537  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:19:56.982652  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:19:56.982664  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:19:56.982697  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:19:56.982789  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:19:56.982802  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:19:56.982829  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:19:56.982876  276679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
	I0601 11:19:57.067574  276679 provision.go:172] copyRemoteCerts
	I0601 11:19:57.067626  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:19:57.067654  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.098669  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.182904  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:19:57.199734  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:19:57.215838  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:19:57.232284  276679 provision.go:86] duration metric: configureAuth took 280.774927ms
	I0601 11:19:57.232312  276679 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:19:57.232468  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:57.232480  276679 machine.go:91] provisioned docker machine in 3.584963826s
	I0601 11:19:57.232486  276679 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:19:57.232492  276679 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:19:57.232530  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:19:57.232572  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.265048  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.351029  276679 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:19:57.353646  276679 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:19:57.353677  276679 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:19:57.353687  276679 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:19:57.353695  276679 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:19:57.353706  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:19:57.353765  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:19:57.353858  276679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:19:57.353951  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:19:57.360153  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:19:57.376881  276679 start.go:309] post-start completed in 144.384989ms
	I0601 11:19:57.376932  276679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:19:57.376962  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.411118  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.496188  276679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:19:57.499982  276679 fix.go:57] fixHost completed within 4.363400058s
	I0601 11:19:57.500005  276679 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 4.363442227s
	I0601 11:19:57.500082  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532057  276679 ssh_runner.go:195] Run: systemctl --version
	I0601 11:19:57.532107  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532107  276679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:19:57.532168  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.567039  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.567550  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.677865  276679 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:19:57.688848  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:19:57.697588  276679 docker.go:187] disabling docker service ...
	I0601 11:19:57.697632  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:19:57.706476  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:19:57.714826  276679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:19:57.791919  276679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:19:57.865357  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:19:57.874183  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:19:57.886120  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.893706  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.901159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.908873  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.916512  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:19:57.923712  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
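[editor's note] For reference, the base64 payload written to /etc/containerd/containerd.conf.d/02-containerd.conf above decodes to the single TOML line `version = 2`. A minimal standalone Go sketch (standard library only, not minikube code) confirms the decode:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Payload from the log line above: `printf %s "dmVyc2lvbiA9IDIK" | base64 -d`.
	b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", b) // prints "version = 2\n"
}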
	I0601 11:19:57.935738  276679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:19:57.941802  276679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:19:57.947777  276679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:19:58.021579  276679 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:19:58.089337  276679 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:19:58.089424  276679 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:19:58.092751  276679 start.go:468] Will wait 60s for crictl version
	I0601 11:19:58.092798  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:19:58.116611  276679 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:19:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
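[editor's note] The retry above is a poll-until-ready pattern: containerd was restarted moments earlier, so its CRI server briefly reports "not initialized" and the caller waits before reissuing `crictl version`. A minimal sketch of that pattern, using a hypothetical helper rather than minikube's actual retry.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times, sleeping delay between failures.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// Poll until containerd's CRI server answers (crictl assumed on PATH).
	err := retry(5, 10*time.Second, func() error {
		return exec.Command("sudo", "crictl", "version").Run()
	})
	if err != nil {
		fmt.Println(err)
	}
}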
	I0601 11:20:09.163975  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:20:09.186613  276679 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:20:09.186676  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.214385  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.243587  276679 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:20:09.245245  276679 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:09.276501  276679 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:20:09.279800  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
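[editor's note] The one-liner above pins host.minikube.internal idempotently: strip any existing line for the name, append the fresh IP-to-name mapping, then copy the result back over /etc/hosts. The same logic as a self-contained Go sketch (operating on a demo file, since the real path needs sudo; pinHost is an illustrative helper, not minikube's):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any line ending in "\t<name>" and appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline in the log above.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	path := "/tmp/hosts.demo" // stand-in for /etc/hosts
	_ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := pinHost(path, "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}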
	I0601 11:20:09.290992  276679 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:20:09.292426  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:20:09.292493  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.315170  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.315189  276679 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:20:09.315224  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.338119  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.338137  276679 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:20:09.338184  276679 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:20:09.360773  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:09.360799  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:09.360817  276679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:20:09.360831  276679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2
CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:20:09.360999  276679 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:20:09.361105  276679 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 11:20:09.361162  276679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:20:09.368101  276679 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:20:09.368169  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:20:09.374382  276679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:20:09.386282  276679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:20:09.398188  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:20:09.409736  276679 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:20:09.412458  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.420789  276679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:20:09.420897  276679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:20:09.420940  276679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:20:09.421000  276679 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:20:09.421053  276679 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:20:09.421088  276679 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:20:09.421176  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:20:09.421205  276679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:20:09.421216  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:20:09.421244  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:20:09.421270  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:20:09.421298  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:20:09.421334  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:20:09.421917  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:20:09.438490  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:20:09.454711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:20:09.471469  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:20:09.488271  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:20:09.504375  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:20:09.520473  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:20:09.536663  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:20:09.552725  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:20:09.568724  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:20:09.584711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:20:09.600406  276679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:20:09.611814  276679 ssh_runner.go:195] Run: openssl version
	I0601 11:20:09.616280  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:20:09.623058  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625881  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625913  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.630367  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:20:09.636712  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:20:09.643407  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646316  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646366  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.650791  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:20:09.657126  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:20:09.663990  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666934  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666966  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.671359  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:20:09.677573  276679 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] S
tartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:09.677668  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:20:09.677695  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:09.700805  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:09.700825  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:09.700835  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:09.700844  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:09.700853  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:09.700863  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:09.700870  276679 cri.go:87] found id: ""
	I0601 11:20:09.700900  276679 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:20:09.711953  276679 cri.go:114] JSON = null
	W0601 11:20:09.711995  276679 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0601 11:20:09.712052  276679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:20:09.718628  276679 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:20:09.718649  276679 kubeadm.go:626] restartCluster start
	I0601 11:20:09.718687  276679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:20:09.724992  276679 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.725567  276679 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601110654-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:20:09.725941  276679 kubeconfig.go:127] "default-k8s-different-port-20220601110654-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:20:09.726552  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:20:09.727803  276679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:20:09.734151  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.734186  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.741699  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.942065  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.942125  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.950479  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.142775  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.142860  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.151184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.342428  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.342511  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.350942  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.542230  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.542324  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.550731  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.741765  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.741840  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.750184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.942518  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.942589  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.951137  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.142442  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.142519  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.151332  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.342632  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.342693  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.351149  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.542423  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.542483  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.550625  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.741869  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.741945  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.750554  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.942776  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.942855  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.951226  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.142534  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.142617  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.151065  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.342354  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.342429  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.350855  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.542142  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.542207  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.550615  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.741824  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.741894  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.750511  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.750537  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.750569  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.758099  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.758124  276679 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
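
Each `Checking apiserver status ...` block above is one iteration of a roughly 200ms poll: `pgrep -xnf kube-apiserver.*minikube.*` exits 1 while no apiserver process exists, and exhausting the retries is what produces the `needs reconfigure` decision. A bare-bones version of that loop (the timeout and cadence here are illustrative, not minikube's exact constants):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until the apiserver process appears
// or the deadline passes. pgrep exits non-zero when nothing matches.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence in the log
	}
	return "", fmt.Errorf("timed out waiting for the condition")
}

func main() {
	pid, err := waitForAPIServerPID(3 * time.Second)
	if err != nil {
		fmt.Println("needs reconfigure: apiserver error:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
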
	I0601 11:20:12.758131  276679 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:20:12.758146  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:20:12.758196  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:12.782896  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:12.782918  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:12.782924  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:12.782931  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:12.782936  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:12.782943  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:12.782948  276679 cri.go:87] found id: ""
	I0601 11:20:12.782955  276679 cri.go:232] Stopping containers: [fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44]
	I0601 11:20:12.782994  276679 ssh_runner.go:195] Run: which crictl
	I0601 11:20:12.785799  276679 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44
	I0601 11:20:12.809504  276679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:20:12.819061  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:20:12.825913  276679 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:07 /etc/kubernetes/scheduler.conf
	
	I0601 11:20:12.825968  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 11:20:12.832916  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 11:20:12.839178  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.845567  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.845605  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.851603  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 11:20:12.857919  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.857967  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
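
The grep checks above verify that each kubeconfig under /etc/kubernetes points at the expected control-plane endpoint (`https://control-plane.minikube.internal:8444`); files that fail the check (controller-manager.conf and scheduler.conf here) are removed so the kubeconfig phase can regenerate them. Sketched the same way, relying on grep's exit status 1 meaning "no match":

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits 1 when the endpoint is absent from the file.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			os.Remove(conf) // the real flow runs `sudo rm -f` over SSH instead
		}
	}
}
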
	I0601 11:20:12.864112  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870523  276679 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870540  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:12.912381  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.433508  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.566844  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.617762  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
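
The restart then re-runs the individual kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`. A compact sketch of that sequence, reusing the exact commands from the log and assuming the pinned binaries live under /var/lib/minikube/binaries:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" `+
				`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}
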
	I0601 11:20:13.686212  276679 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:20:13.686269  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.195273  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.695296  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.195457  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.695544  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.195542  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.695465  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.195333  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.694666  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:18.194692  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:18.694918  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.195623  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.695137  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.758656  276679 api_server.go:71] duration metric: took 6.072444993s to wait for apiserver process to appear ...
	I0601 11:20:19.758687  276679 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:20:19.758700  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.369047  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:20:22.369078  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:20:22.869917  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.874561  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:22.874589  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.370203  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.375048  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:23.375073  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.869242  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.874012  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0601 11:20:23.879941  276679 api_server.go:140] control plane version: v1.23.6
	I0601 11:20:23.879963  276679 api_server.go:130] duration metric: took 4.121269797s to wait for apiserver health ...
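
The /healthz probe above first returns 403 (the anonymous user may not read /healthz), then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A minimal poller in the same spirit; note this sketch skips TLS verification purely for brevity, whereas minikube authenticates with the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch only: trust the endpoint blindly.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://192.168.49.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
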
	I0601 11:20:23.879972  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:23.879977  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:23.882052  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:20:23.883460  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:20:23.886921  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:20:23.886945  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:20:23.899955  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:20:24.544438  276679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:20:24.550979  276679 system_pods.go:59] 9 kube-system pods found
	I0601 11:20:24.551015  276679 system_pods.go:61] "coredns-64897985d-9gcj2" [28e98fca-a88b-422d-9f4b-797b18a8ff7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551025  276679 system_pods.go:61] "etcd-default-k8s-different-port-20220601110654-6708" [3005e651-1349-4d5e-b06f-e0fac3064ccf] Running
	I0601 11:20:24.551035  276679 system_pods.go:61] "kindnet-7fspq" [eefcd8e6-51e4-4d48-a420-93f4b47cf732] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:20:24.551042  276679 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601110654-6708" [974fafdd-9176-4d97-acd7-9874d63b4987] Running
	I0601 11:20:24.551053  276679 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601110654-6708" [38b2c1a1-9a1a-4a1f-9fac-904e47d545be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:20:24.551066  276679 system_pods.go:61] "kube-proxy-slzcl" [a1a6237f-6142-4e31-8bd4-72afd4f8a4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:20:24.551083  276679 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601110654-6708" [42ce6176-36e5-46bc-a443-19e4ca958785] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:20:24.551092  276679 system_pods.go:61] "metrics-server-b955d9d8-2k9wk" [fbc457b5-c359-4b84-abe5-d488874181f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551102  276679 system_pods.go:61] "storage-provisioner" [48086474-3417-47ff-970d-f7cf7806983b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551112  276679 system_pods.go:74] duration metric: took 6.652373ms to wait for pod list to return data ...
	I0601 11:20:24.551126  276679 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:20:24.553819  276679 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:20:24.553843  276679 node_conditions.go:123] node cpu capacity is 8
	I0601 11:20:24.553854  276679 node_conditions.go:105] duration metric: took 2.721044ms to run NodePressure ...
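
The NodePressure check reads the node's advertised capacity (304695084Ki ephemeral storage and 8 CPUs here) straight off the Node object. Roughly, with client-go (the kubeconfig path is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList; Cpu() and StorageEphemeral() return Quantities.
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
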
	I0601 11:20:24.553869  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:24.680194  276679 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683686  276679 kubeadm.go:777] kubelet initialised
	I0601 11:20:24.683708  276679 kubeadm.go:778] duration metric: took 3.487172ms waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683715  276679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
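
Every `pod_ready` line that follows is one poll of the coredns pod's status: the node still carries the node.kubernetes.io/not-ready taint, so PodScheduled stays False, the pod stays Pending, and it never reaches Ready before the 4m0s budget runs out. The Ready test itself is just a scan of the pod's status conditions, e.g.:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(clientset *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(clientset, "kube-system", "coredns-64897985d-9gcj2")
	fmt.Println("ready:", ready, "err:", err)
}
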
	I0601 11:20:24.689167  276679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	I0601 11:20:26.694484  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:28.695017  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.695110  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:32.695566  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.195305  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:37.197596  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:39.695270  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:42.195160  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:44.694661  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:46.695274  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:48.696514  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:51.195247  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:53.694370  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:55.694640  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:57.695171  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:00.194872  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:02.195437  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:04.694423  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:06.695087  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:09.195174  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:11.694583  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:14.194681  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:16.694557  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:18.694796  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:21.194627  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:23.694527  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:25.694996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:28.196140  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.694688  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:32.695236  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:35.195034  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:37.195304  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:39.694703  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:42.195994  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:44.695306  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.194624  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:49.195368  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:51.694946  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:54.194912  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:56.195652  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:58.694995  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:01.194431  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:03.195297  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:05.694312  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:07.695082  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:10.194760  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:12.194885  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:14.195226  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:16.694528  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:18.695235  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:21.194694  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:23.197530  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... identical pod_ready.go:102 poll for "coredns-64897985d-9gcj2" repeated at ~2.5s intervals from 11:22:25 through 11:24:20 ...]
	I0601 11:24:23.195952  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:24.691928  276679 pod_ready.go:81] duration metric: took 4m0.002724634s waiting for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	E0601 11:24:24.691955  276679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:24:24.691981  276679 pod_ready.go:38] duration metric: took 4m0.008258762s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:24:24.692005  276679 kubeadm.go:630] restartCluster took 4m14.973349857s
	W0601 11:24:24.692130  276679 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
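For reference, the pod_ready.go wait above boils down to polling the pod's Ready condition on a fixed interval until a deadline. Below is a minimal client-go sketch of that pattern; it is an illustration, not minikube's actual implementation. The kubeconfig path, 2.5s cadence, and 4m0s timeout mirror the log, everything else is assumed.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2.5s for up to 4m0s, matching the cadence and timeout in the log.
		err = wait.PollImmediate(2500*time.Millisecond, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-64897985d-9gcj2", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors count as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // a Pending pod may carry no Ready condition at all
		})
		fmt.Println("wait result:", err) // wait.ErrWaitTimeout corresponds to the 4m0s failure above
	}

In this run the poll could never succeed: the node kept its node.kubernetes.io/not-ready taint, so the scheduler never placed the coredns pod and its phase stayed Pending for the full four minutes.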
	I0601 11:24:24.692159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:24:26.286416  276679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.594228976s)
	I0601 11:24:26.286489  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:26.296314  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:24:26.303059  276679 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:24:26.303116  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:24:26.309917  276679 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:24:26.309957  276679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:24:26.556270  276679 out.go:204]   - Generating certificates and keys ...
	I0601 11:24:27.302083  276679 out.go:204]   - Booting up control plane ...
	I0601 11:24:38.840585  276679 out.go:204]   - Configuring RBAC rules ...
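The recovery path shells out to kubeadm on the node via ssh_runner: a forced reset to tear down the half-restarted control plane, then a fresh init from the staged config with the same preflight checks skipped. Run directly on such a node, the equivalent would look roughly like the os/exec sketch below; the flags are copied from the log, the long --ignore-preflight-errors list is abbreviated here, and this is destructive anywhere outside a disposable minikube node.

	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v\n%s", err, out)
		}
	}

	func main() {
		// Tear down the previous cluster state, pointing kubeadm at containerd's socket.
		run("kubeadm", "reset", "--cri-socket", "/run/containerd/containerd.sock", "--force")
		// Re-initialize from the config minikube staged at /var/tmp/minikube/kubeadm.yaml,
		// skipping the same preflight checks (abbreviated) as the logged command.
		run("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors=Port-10250,Swap,Mem,SystemVerification")
	}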
	I0601 11:24:39.253770  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:24:39.253791  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:24:39.255739  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:24:39.257207  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:24:39.261207  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:24:39.261228  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:24:39.273744  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:24:39.861493  276679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:24:39.861573  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.861574  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_24_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.914842  276679 ops.go:34] apiserver oom_adj: -16
	I0601 11:24:39.914913  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... identical kubectl get sa default probe repeated every ~0.5s from 11:24:40 through 11:24:53 ...]
	I0601 11:24:53.594790  276679 kubeadm.go:1045] duration metric: took 13.733266896s to wait for elevateKubeSystemPrivileges.
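The elevateKubeSystemPrivileges step above binds cluster-admin to the kube-system:default ServiceAccount, then re-runs kubectl get sa default every ~0.5s until the token controller has created the default ServiceAccount in the fresh cluster. The same probe with client-go would look roughly like this sketch (imports and clientset as in the earlier example; the 2m ceiling is an assumption, the log shows success after 13.7s):

	// waitForDefaultSA polls until "kubectl get sa default" would succeed.
	func waitForDefaultSA(cs *kubernetes.Clientset) error {
		return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
			return err == nil, nil // NotFound until the controller catches up
		})
	}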
	I0601 11:24:53.594820  276679 kubeadm.go:397] StartCluster complete in 4m43.917251881s
	I0601 11:24:53.594841  276679 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:53.594938  276679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:24:53.596907  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:54.111475  276679 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:24:54.111547  276679 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:24:54.113711  276679 out.go:177] * Verifying Kubernetes components...
	I0601 11:24:54.111604  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:24:54.111644  276679 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:24:54.111802  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:24:54.115020  276679 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115035  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:54.115035  276679 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115048  276679 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115055  276679 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115057  276679 addons.go:165] addon storage-provisioner should already be in state true
	W0601 11:24:54.115064  276679 addons.go:165] addon metrics-server should already be in state true
	I0601 11:24:54.115034  276679 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115103  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115109  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115112  276679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115037  276679 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115134  276679 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115144  276679 addons.go:165] addon dashboard should already be in state true
	I0601 11:24:54.115176  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115416  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115596  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115611  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115615  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.129176  276679 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:24:54.168194  276679 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:24:54.169714  276679 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.171144  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:24:54.170891  276679 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.171181  276679 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:24:54.171211  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.171167  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:24:54.171329  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.171684  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.176157  276679 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.177770  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:24:54.177796  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:24:54.179131  276679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:24:54.177859  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.180787  276679 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.180809  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:24:54.180855  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.233206  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.240234  276679 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.240263  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:24:54.240311  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.240743  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.242497  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.255476  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:24:54.289597  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.510589  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.510747  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:24:54.510770  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:24:54.556919  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:24:54.556950  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:24:54.566012  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:24:54.566042  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:24:54.569528  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.576575  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:24:54.576625  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:24:54.654525  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:24:54.654551  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:24:54.655296  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.655319  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:24:54.661290  276679 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
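The one-liner above rewrites the coredns ConfigMap in place: sed inserts a hosts block immediately before the forward directive and kubectl replace pushes the result back, so the patched Corefile ends up containing a fragment like:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

This is what makes host.minikube.internal resolve to 192.168.49.1 from inside the cluster.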
	I0601 11:24:54.671592  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:24:54.671621  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:24:54.673696  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.687107  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:24:54.687133  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:24:54.768961  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:24:54.768989  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:24:54.854363  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:24:54.854399  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:24:54.870735  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:24:54.870762  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:24:54.888031  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:54.888063  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:24:54.967082  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:55.273650  276679 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:55.661065  276679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 11:24:55.662561  276679 addons.go:417] enableAddons completed in 1.550935677s
	I0601 11:24:56.136034  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	[... identical node_ready.go:58 "Ready":"False" check repeated at ~2.5s intervals from 11:24:58 through 11:27:33 ...]
	I0601 11:27:35.635813  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:37.636126  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:39.636375  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:42.136175  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:44.636682  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:47.135843  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:49.136252  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:51.137073  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:53.636035  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:55.636279  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:58.136943  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:00.635664  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:02.636502  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:04.638145  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:07.136842  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:09.636372  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:12.136048  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:14.136569  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:16.635705  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:18.636532  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:21.136177  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:23.636753  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:26.136524  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:28.635691  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:30.636561  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:33.136478  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:35.636196  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:38.137078  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:40.636164  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:42.636749  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:45.136427  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:47.636180  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:49.636861  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:52.136563  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.136714  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.138823  276679 node_ready.go:38] duration metric: took 4m0.0096115s waiting for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:28:54.141397  276679 out.go:177] 
	W0601 11:28:54.143025  276679 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:28:54.143041  276679 out.go:239] * 
	W0601 11:28:54.143750  276679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:28:54.145729  276679 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220601110654-6708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
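Note on the failure mode: the stderr above shows node_ready.go polling the node roughly every 2-2.5s and reporting "Ready":"False" for the full 4m window before the 6m0s GUEST_START budget expired. The sketch below is a minimal, illustrative equivalent of that probe (it is not minikube's node_ready.go) for re-running the check by hand against a profile like this one; it assumes client-go and the default kubeconfig path rather than the MINIKUBE_HOME/KUBECONFIG layout used by this job.

// readycheck.go: minimal sketch of a node-Ready poll equivalent to the
// node_ready.go lines in the log above. Illustrative only; names, kubeconfig
// location, and the 4m/2.5s timings are taken from this log, not from minikube code.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default ~/.kube/config; the CI run points KUBECONFIG elsewhere.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node := "default-k8s-different-port-20220601110654-6708"
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
	for time.Now().Before(deadline) {
		n, err := client.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", node, c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		time.Sleep(2500 * time.Millisecond) // the log shows ~2-2.5s between probes
	}
	log.Fatalf("timed out waiting for node %q to be Ready", node)
}

In a healthy run the Ready condition flips to True once the CNI comes up (kindnet is what minikube recommends for docker+containerd, per the Last Start log below); here it never did, so any such probe times out exactly as the harness did.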
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601110654-6708
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220601110654-6708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b",
	        "Created": "2022-06-01T11:07:03.290503902Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276959,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:19:53.572720887Z",
	            "FinishedAt": "2022-06-01T11:19:52.302658787Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hosts",
	        "LogPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b-json.log",
	        "Name": "/default-k8s-different-port-20220601110654-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220601110654-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220601110654-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220601110654-6708",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220601110654-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220601110654-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "627aaaeeaa419894172d2929261a1bd95129c59503b90707762ab0b61d080e8a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49439"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/627aaaeeaa41",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220601110654-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dccf9935a74c",
	                        "default-k8s-different-port-20220601110654-6708"
	                    ],
	                    "NetworkID": "7d52ef0dc0855b59c05da2e66b25f4d0866ad1d653be1fa615e193dd86443771",
	                    "EndpointID": "6107b065ae8c8c99ec32f0643fe4776fd7bfb23a42439002519244e27fe4c287",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
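The inspect dump above is long; for post-mortems only a few fields usually matter, e.g. that the container is still running and that 8444/tcp is published at 127.0.0.1:49439. As a hypothetical convenience (not part of helpers_test.go), the sketch below shells out to docker inspect and decodes just those fields:

// inspectfields.go: small helper sketch (illustrative, not harness code) that
// runs `docker inspect` on the container above and prints only the state and
// the host binding for the 8444/tcp API-server port.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Minimal slice of the docker inspect JSON shown above.
type inspect struct {
	State struct {
		Status string
	}
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	name := "default-k8s-different-port-20220601110654-6708"
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect // docker inspect emits a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println("state:", c.State.Status)
		for _, b := range c.NetworkSettings.Ports["8444/tcp"] {
			// Expect 127.0.0.1:49439 per the dump above.
			fmt.Printf("apiserver: %s:%s\n", b.HostIp, b.HostPort)
		}
	}
}

The docker CLI can extract single fields directly as well (docker inspect -f '{{.State.Status}}' <container>), which is the same Go-template idea the harness applies next via out/minikube-linux-amd64 status --format={{.Host}}.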
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220601110654-6708 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:11 UTC | 01 Jun 22 11:11 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | calico-20220601104839-6708                                 | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220601104839-6708                              | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:21 UTC | 01 Jun 22 11:21 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:25 UTC | 01 Jun 22 11:25 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:19:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:19:52.827023  276679 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:19:52.827225  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827237  276679 out.go:309] Setting ErrFile to fd 2...
	I0601 11:19:52.827242  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827359  276679 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:19:52.827588  276679 out.go:303] Setting JSON to false
	I0601 11:19:52.828890  276679 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3747,"bootTime":1654078646,"procs":456,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:19:52.828955  276679 start.go:125] virtualization: kvm guest
	I0601 11:19:52.831944  276679 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:19:52.833439  276679 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:52.833372  276679 notify.go:193] Checking for updates...
	I0601 11:19:52.835007  276679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:52.836578  276679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:19:52.837966  276679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:19:52.839440  276679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:19:52.841215  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:52.841578  276679 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:52.880823  276679 docker.go:137] docker version: linux-20.10.16
	I0601 11:19:52.880897  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:52.978177  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:52.908721136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:52.978275  276679 docker.go:254] overlay module found
	I0601 11:19:52.981078  276679 out.go:177] * Using the docker driver based on existing profile
	I0601 11:19:52.982316  276679 start.go:284] selected driver: docker
	I0601 11:19:52.982326  276679 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:52.982412  276679 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:52.983242  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:53.085320  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:53.012439643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:53.085561  276679 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:19:53.085581  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:19:53.085589  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:19:53.085608  276679 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:53.088575  276679 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.089964  276679 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:19:53.091501  276679 out.go:177] * Pulling base image ...
	I0601 11:19:53.092800  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:19:53.092839  276679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:19:53.092856  276679 cache.go:57] Caching tarball of preloaded images
	I0601 11:19:53.092897  276679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:19:53.093061  276679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:19:53.093076  276679 cache.go:60] Finished verifying existence of preloaded tar for v1.23.6 on containerd
	I0601 11:19:53.093182  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.136384  276679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:19:53.136410  276679 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:19:53.136424  276679 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:19:53.136454  276679 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:19:53.136550  276679 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 69.025µs
	I0601 11:19:53.136570  276679 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:19:53.136577  276679 fix.go:55] fixHost starting: 
	I0601 11:19:53.137208  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.168642  276679 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601110654-6708: state=Stopped err=<nil>
	W0601 11:19:53.168681  276679 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:19:53.170972  276679 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601110654-6708" ...
	I0601 11:19:50.719789  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.220276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.243194  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:55.243470  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:53.172500  276679 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.580842  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.615796  276679 kic.go:416] container "default-k8s-different-port-20220601110654-6708" state is running.
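
The restart path here is plain docker CLI plumbing: "docker start", then "docker container inspect --format {{.State.Status}}" to confirm the container is running again. A minimal Go sketch of that state probe, assuming only the docker CLI on PATH (an illustrative helper, not minikube's actual cli_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to the docker CLI, exactly like the
// cli_runner lines above, and returns e.g. "running" or "exited".
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("default-k8s-different-port-20220601110654-6708")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("state:", state) // "running" right after docker start succeeds
}
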
	I0601 11:19:53.616193  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.647308  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.647503  276679 machine.go:88] provisioning docker machine ...
	I0601 11:19:53.647526  276679 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:19:53.647560  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.679842  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:53.680106  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:53.680131  276679 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:19:53.680742  276679 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55946->127.0.0.1:49442: read: connection reset by peer
	I0601 11:19:56.807880  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:19:56.807951  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.839321  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:56.839475  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:56.839510  276679 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:19:56.951445  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: 
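
The SSH command above is deliberately idempotent: it touches /etc/hosts only if no line already ends with the hostname, and prefers rewriting an existing 127.0.1.1 entry over appending a new one. A sketch of how such a command can be templated from Go (hypothetical helper; the hostname is the one from this run):

package main

import "fmt"

// hostsCmd renders the same idempotent /etc/hosts update that the
// provisioner ran over SSH above, for an arbitrary hostname.
func hostsCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsCmd("default-k8s-different-port-20220601110654-6708"))
}
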
	I0601 11:19:56.951473  276679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:19:56.951491  276679 ubuntu.go:177] setting up certificates
	I0601 11:19:56.951499  276679 provision.go:83] configureAuth start
	I0601 11:19:56.951539  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.982392  276679 provision.go:138] copyHostCerts
	I0601 11:19:56.982451  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:19:56.982464  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:19:56.982537  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:19:56.982652  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:19:56.982664  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:19:56.982697  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:19:56.982789  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:19:56.982802  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:19:56.982829  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:19:56.982876  276679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
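
The "generating server cert" step signs a server certificate with the cluster's local CA, embedding every entry from the san=[...] list so the machine can be reached by IP or by hostname. A heavily trimmed sketch of that issuance with Go's crypto/x509, using a throwaway in-memory CA in place of minikubeCA (real code also persists PEM files and manages serials):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate signed by ca/caKey with the
// SANs from the provision.go line above (IPs and DNS names split out).
func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20220601110654-6708"}},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-different-port-20220601110654-6708"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, err := signServerCert(ca, caKey)
	fmt.Println(len(der), err)
}
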
	I0601 11:19:57.067574  276679 provision.go:172] copyRemoteCerts
	I0601 11:19:57.067626  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:19:57.067654  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.098669  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.182904  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:19:57.199734  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:19:57.215838  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:19:57.232284  276679 provision.go:86] duration metric: configureAuth took 280.774927ms
	I0601 11:19:57.232312  276679 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:19:57.232468  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:57.232480  276679 machine.go:91] provisioned docker machine in 3.584963826s
	I0601 11:19:57.232486  276679 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:19:57.232492  276679 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:19:57.232530  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:19:57.232572  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.265048  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.351029  276679 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:19:57.353646  276679 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:19:57.353677  276679 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:19:57.353687  276679 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:19:57.353695  276679 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:19:57.353706  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:19:57.353765  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:19:57.353858  276679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:19:57.353951  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:19:57.360153  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:19:57.376881  276679 start.go:309] post-start completed in 144.384989ms
	I0601 11:19:57.376932  276679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:19:57.376962  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.411118  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.496188  276679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:19:57.499982  276679 fix.go:57] fixHost completed within 4.363400058s
	I0601 11:19:57.500005  276679 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 4.363442227s
	I0601 11:19:57.500082  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532057  276679 ssh_runner.go:195] Run: systemctl --version
	I0601 11:19:57.532107  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532107  276679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:19:57.532168  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.567039  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.567550  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.677865  276679 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:19:57.688848  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:19:57.697588  276679 docker.go:187] disabling docker service ...
	I0601 11:19:57.697632  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:19:57.706476  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:19:57.714826  276679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:19:57.791919  276679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:19:55.719582  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:58.219607  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:57.743387  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:00.243011  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:57.865357  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:19:57.874183  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:19:57.886120  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.893706  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.901159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.908873  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.916512  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:19:57.923712  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0601 11:19:57.935738  276679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:19:57.941802  276679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:19:57.947777  276679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:19:58.021579  276679 ssh_runner.go:195] Run: sudo systemctl restart containerd
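
Reconfiguring containerd is done with in-place sed edits of /etc/containerd/config.toml (sandbox image, cgroup driver, CNI conf dir), plus a drop-in import whose base64 payload dmVyc2lvbiA9IDIK decodes to "version = 2", followed by a daemon-reload and restart. A sketch of the same key = value rewrite in Go (hypothetical helper; key and value taken from one of the sed lines above):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces any line assigning `key` with "key = value",
// mirroring the sed -e 's|^.*key = .*$|key = value|' -i calls above.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+" = "+value))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setTOMLKey("/etc/containerd/config.toml", "SystemdCgroup", "false"); err != nil {
		fmt.Println("error:", err)
	}
}
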
	I0601 11:19:58.089337  276679 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:19:58.089424  276679 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:19:58.092751  276679 start.go:468] Will wait 60s for crictl version
	I0601 11:19:58.092798  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:19:58.116611  276679 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:19:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:20:00.719494  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:03.219487  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:02.243060  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:04.243463  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:06.244423  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:05.719159  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:07.719735  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:09.163975  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:20:09.186613  276679 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:20:09.186676  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.214385  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.243587  276679 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:20:09.245245  276679 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:09.276501  276679 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:20:09.279800  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.290992  276679 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:20:08.742836  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:11.242670  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:09.292426  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:20:09.292493  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.315170  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.315189  276679 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:20:09.315224  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.338119  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.338137  276679 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:20:09.338184  276679 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:20:09.360773  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:09.360799  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:09.360817  276679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:20:09.360831  276679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:20:09.360999  276679 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:20:09.361105  276679 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 11:20:09.361162  276679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:20:09.368101  276679 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:20:09.368169  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:20:09.374382  276679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:20:09.386282  276679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:20:09.398188  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:20:09.409736  276679 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:20:09.412458  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.420789  276679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:20:09.420897  276679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:20:09.420940  276679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:20:09.421000  276679 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:20:09.421053  276679 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:20:09.421088  276679 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:20:09.421176  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:20:09.421205  276679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:20:09.421216  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:20:09.421244  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:20:09.421270  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:20:09.421298  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:20:09.421334  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:20:09.421917  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:20:09.438490  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:20:09.454711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:20:09.471469  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:20:09.488271  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:20:09.504375  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:20:09.520473  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:20:09.536663  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:20:09.552725  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:20:09.568724  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:20:09.584711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:20:09.600406  276679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:20:09.611814  276679 ssh_runner.go:195] Run: openssl version
	I0601 11:20:09.616280  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:20:09.623058  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625881  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625913  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.630367  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:20:09.636712  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:20:09.643407  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646316  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646366  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.650791  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:20:09.657126  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:20:09.663990  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666934  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666966  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.671359  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
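
These openssl/ln pairs wire the copied PEMs into the OpenSSL-style trust store: "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 for minikubeCA.pem in this run), and /etc/ssl/certs/<hash>.0 must be a symlink to the certificate for lookups to find it. The same wiring in sketch form (illustrative helper; needs root to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the certificate's subject hash with openssl and
// points the /etc/ssl/certs/<hash>.0 symlink at the PEM file.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
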
	I0601 11:20:09.677573  276679 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:09.677668  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:20:09.677695  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:09.700805  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:09.700825  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:09.700835  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:09.700844  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:09.700853  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:09.700863  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:09.700870  276679 cri.go:87] found id: ""
	I0601 11:20:09.700900  276679 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:20:09.711953  276679 cri.go:114] JSON = null
	W0601 11:20:09.711995  276679 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0601 11:20:09.712052  276679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:20:09.718628  276679 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:20:09.718649  276679 kubeadm.go:626] restartCluster start
	I0601 11:20:09.718687  276679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:20:09.724992  276679 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.725567  276679 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601110654-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:20:09.725941  276679 kubeconfig.go:127] "default-k8s-different-port-20220601110654-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:20:09.726552  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:20:09.727803  276679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:20:09.734151  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.734186  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.741699  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.942065  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.942125  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.950479  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.142775  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.142860  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.151184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.342428  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.342511  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.350942  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.542230  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.542324  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.550731  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.741765  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.741840  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.750184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.942518  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.942589  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.951137  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.142442  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.142519  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.151332  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.342632  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.342693  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.351149  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.542423  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.542483  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.550625  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.741869  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.741945  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.750554  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.942776  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.942855  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.951226  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.142534  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.142617  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.151065  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.342354  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.342429  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.350855  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.542142  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.542207  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.550615  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.741824  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.741894  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.750511  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.750537  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.750569  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.758099  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.758124  276679 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
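
Each "Checking apiserver status ..." stanza above is one iteration of a roughly 200ms poll: pgrep looks for a kube-apiserver process, and when none appears before the window closes, restartCluster concludes the control plane needs a reconfigure. The loop in sketch form (pgrep pattern copied from the log; timings illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPID polls for a running kube-apiserver and returns its pid,
// or an error once the timeout elapses.
func apiserverPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return "", fmt.Errorf("timed out waiting for the condition")
}

func main() {
	fmt.Println(apiserverPID(3 * time.Second))
}
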
	I0601 11:20:12.758131  276679 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:20:12.758146  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:20:12.758196  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:12.782896  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:12.782918  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:12.782924  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:12.782931  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:12.782936  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:12.782943  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:12.782948  276679 cri.go:87] found id: ""
	I0601 11:20:12.782955  276679 cri.go:232] Stopping containers: [fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44]
	I0601 11:20:12.782994  276679 ssh_runner.go:195] Run: which crictl
	I0601 11:20:12.785799  276679 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44
	I0601 11:20:12.809504  276679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
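	For reference, the three steps just logged (list kube-system container IDs with crictl, stop them, then stop the kubelet) can be reproduced outside the test harness. A minimal Go sketch, assuming crictl and systemctl are on PATH and the process runs as root on the node; the stopKubeSystem helper itself is hypothetical:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// stopKubeSystem mirrors the log above: list all CRI containers labelled
	// with the kube-system namespace, stop them, then stop the kubelet.
	func stopKubeSystem() error {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return fmt.Errorf("listing containers: %w", err)
		}
		ids := strings.Fields(string(out))
		if len(ids) > 0 {
			if err := exec.Command("crictl", append([]string{"stop"}, ids...)...).Run(); err != nil {
				return fmt.Errorf("stopping containers: %w", err)
			}
		}
		return exec.Command("systemctl", "stop", "kubelet").Run()
	}
	
	func main() {
		if err := stopKubeSystem(); err != nil {
			fmt.Println(err)
		}
	}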
	I0601 11:20:12.819061  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:20:12.825913  276679 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:07 /etc/kubernetes/scheduler.conf
	
	I0601 11:20:12.825968  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 11:20:10.219173  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:12.219371  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:13.243691  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:15.243798  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:12.832916  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 11:20:12.839178  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.845567  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.845605  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.851603  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 11:20:12.857919  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.857967  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
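	The grep/rm sequence above is stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A sketch of the same check, assuming it runs as root on the node (the log shells out via sudo); the endpoint and file list are taken from the log, the loop is illustrative:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		const endpoint = "https://control-plane.minikube.internal:8444"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the pattern is absent, which is
			// exactly the "may not be in ... - will remove" case above.
			if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
				fmt.Println("removing stale", f)
				os.Remove(f)
			}
		}
	}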
	I0601 11:20:12.864112  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870523  276679 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870540  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:12.912381  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.433508  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.566844  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.617762  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
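	The five Run lines above re-execute individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init, which regenerates certificates and static-pod manifests while keeping the rest of the cluster state. A sketch of the same phase loop; the config path matches the log, everything else is illustrative:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
				return
			}
		}
	}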
	I0601 11:20:13.686212  276679 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:20:13.686269  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.195273  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.695296  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.195457  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.695544  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.195542  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.695465  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.195333  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.694666  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.719337  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.218953  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.742741  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:20.244002  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:18.194692  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:18.694918  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.195623  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.695137  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.758656  276679 api_server.go:71] duration metric: took 6.072444993s to wait for apiserver process to appear ...
	I0601 11:20:19.758687  276679 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:20:19.758700  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.369047  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:20:22.369078  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:20:19.718920  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:21.719314  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:23.719804  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:22.869917  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.874561  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:22.874589  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.370203  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.375048  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:23.375073  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.869242  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.874012  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0601 11:20:23.879941  276679 api_server.go:140] control plane version: v1.23.6
	I0601 11:20:23.879963  276679 api_server.go:130] duration metric: took 4.121269797s to wait for apiserver health ...
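	The healthz probe above moves from 403 (anonymous user, RBAC bootstrap not yet done) through 500 (poststarthooks still failing) to 200 "ok". A sketch of an equivalent poll, assuming TLS verification can be skipped because no client certificate is configured at this stage; the URL is the one in the log:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitHealthz polls /healthz until it returns 200, mirroring the
	// 403 -> 500 -> 200 progression recorded in the log.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz not ready after %s", timeout)
	}
	
	func main() {
		if err := waitHealthz("https://192.168.49.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}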
	I0601 11:20:23.879972  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:23.879977  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:23.882052  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:20:22.743507  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:25.242700  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:23.883460  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:20:23.886921  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:20:23.886945  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:20:23.899955  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:20:24.544438  276679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:20:24.550979  276679 system_pods.go:59] 9 kube-system pods found
	I0601 11:20:24.551015  276679 system_pods.go:61] "coredns-64897985d-9gcj2" [28e98fca-a88b-422d-9f4b-797b18a8ff7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551025  276679 system_pods.go:61] "etcd-default-k8s-different-port-20220601110654-6708" [3005e651-1349-4d5e-b06f-e0fac3064ccf] Running
	I0601 11:20:24.551035  276679 system_pods.go:61] "kindnet-7fspq" [eefcd8e6-51e4-4d48-a420-93f4b47cf732] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:20:24.551042  276679 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601110654-6708" [974fafdd-9176-4d97-acd7-9874d63b4987] Running
	I0601 11:20:24.551053  276679 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601110654-6708" [38b2c1a1-9a1a-4a1f-9fac-904e47d545be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:20:24.551066  276679 system_pods.go:61] "kube-proxy-slzcl" [a1a6237f-6142-4e31-8bd4-72afd4f8a4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:20:24.551083  276679 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601110654-6708" [42ce6176-36e5-46bc-a443-19e4ca958785] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:20:24.551092  276679 system_pods.go:61] "metrics-server-b955d9d8-2k9wk" [fbc457b5-c359-4b84-abe5-d488874181f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551102  276679 system_pods.go:61] "storage-provisioner" [48086474-3417-47ff-970d-f7cf7806983b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551112  276679 system_pods.go:74] duration metric: took 6.652373ms to wait for pod list to return data ...
	I0601 11:20:24.551126  276679 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:20:24.553819  276679 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:20:24.553843  276679 node_conditions.go:123] node cpu capacity is 8
	I0601 11:20:24.553854  276679 node_conditions.go:105] duration metric: took 2.721044ms to run NodePressure ...
	I0601 11:20:24.553869  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:24.680194  276679 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683686  276679 kubeadm.go:777] kubelet initialised
	I0601 11:20:24.683708  276679 kubeadm.go:778] duration metric: took 3.487172ms waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683715  276679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:24.689167  276679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
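	The pod_ready.go lines that follow are polling the pod's Ready condition. A sketch of the same check using client-go (an assumption; the test binary's actual helper is not shown in the log), with the kubeconfig path and pod name taken from the log:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True, the
	// condition the log keeps finding absent while the pod is Pending.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-64897985d-9gcj2", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}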
	I0601 11:20:26.694484  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:26.219205  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:28.219317  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:27.243486  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:29.742717  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:31.742800  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:28.695017  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.695110  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:32.695566  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.219646  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:32.719074  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:34.242643  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:36.243891  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.195305  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:37.197596  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.219473  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:37.719336  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:38.243963  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.743349  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:39.695270  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:42.195160  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.218932  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.719276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.743398  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.243686  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:44.694661  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:46.695274  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.219350  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.719698  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.742813  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.244047  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:48.696514  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:51.195247  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.218967  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.219422  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.743394  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.743515  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:53.694370  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:55.694640  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:57.695171  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.719514  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.219033  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.242819  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.243739  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.739945  270029 pod_ready.go:81] duration metric: took 4m0.002166585s waiting for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
	E0601 11:20:59.739968  270029 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:20:59.739995  270029 pod_ready.go:38] duration metric: took 4m0.008917217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:59.740018  270029 kubeadm.go:630] restartCluster took 4m15.707393707s
	W0601 11:20:59.740131  270029 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 11:20:59.740156  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:21:01.430762  270029 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.690579833s)
	I0601 11:21:01.430838  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:01.440364  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:21:01.447145  270029 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:21:01.447194  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:21:01.453852  270029 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:21:01.453891  270029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:21:01.701224  270029 out.go:204]   - Generating certificates and keys ...
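	After the reset, a full `kubeadm init` runs with a relaxed preflight list so it can proceed inside a container. A sketch of the invocation with an abridged subset of the ignored checks from the log; running it is destructive, so this is illustrative only:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Abridged from the --ignore-preflight-errors list in the log.
		ignored := []string{
			"DirAvailable--etc-kubernetes-manifests",
			"DirAvailable--var-lib-minikube",
			"DirAvailable--var-lib-minikube-etcd",
			"Port-10250", "Swap", "Mem", "SystemVerification",
			"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
		}
		args := []string{
			"init", "--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors=" + strings.Join(ignored, ","),
		}
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			fmt.Println("kubeadm init failed:", err)
		}
	}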
	I0601 11:21:00.194872  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:02.195437  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.219067  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:01.219719  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:03.719181  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:02.294583  270029 out.go:204]   - Booting up control plane ...
	I0601 11:21:04.694423  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:06.695087  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:05.719516  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:07.719966  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:09.195174  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:11.694583  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:10.218984  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:12.219075  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:14.337355  270029 out.go:204]   - Configuring RBAC rules ...
	I0601 11:21:14.750718  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:21:14.750741  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:21:14.752905  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:21:14.754285  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:21:14.758047  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:21:14.758065  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:21:14.771201  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:21:15.434277  270029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:21:15.434380  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.434381  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601110327-6708 minikube.k8s.io/updated_at=2022_06_01T11_21_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489119  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489208  270029 ops.go:34] apiserver oom_adj: -16
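	The ops.go line above records that the apiserver is shielded from the OOM killer (oom_adj -16). A sketch of the same read, assuming it runs on the node itself; pgrep and the /proc path follow the log:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// Resolve the apiserver PID with pgrep, then read /proc/<pid>/oom_adj,
	// the same pipeline as `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("no kube-apiserver process:", err)
			return
		}
		pid := strings.Fields(string(out))[0]
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}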
	I0601 11:21:16.079192  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:16.579319  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:14.194681  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:16.694557  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:14.219440  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:16.719363  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:17.079349  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:17.579548  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.079683  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.579186  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.079819  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.579346  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.079183  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.579984  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.079335  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.579766  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.694796  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:21.194627  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:19.218867  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:21.219185  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:23.719814  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:22.079321  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:22.579993  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.079856  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.579743  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.079256  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.579276  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.079828  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.579763  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.080068  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.579388  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.694527  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:25.694996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:27.079269  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.579729  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.636171  270029 kubeadm.go:1045] duration metric: took 12.201851278s to wait for elevateKubeSystemPrivileges.
	I0601 11:21:27.636205  270029 kubeadm.go:397] StartCluster complete in 4m43.646757592s
	I0601 11:21:27.636227  270029 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:27.636334  270029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:21:27.637880  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:28.157076  270029 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601110327-6708" rescaled to 1
	I0601 11:21:28.157150  270029 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:21:28.157180  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:21:28.159818  270029 out.go:177] * Verifying Kubernetes components...
	I0601 11:21:28.157185  270029 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:21:28.157406  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:21:28.161484  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:28.161496  270029 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161511  270029 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161523  270029 addons.go:165] addon metrics-server should already be in state true
	I0601 11:21:28.161537  270029 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161566  270029 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601110327-6708"
	I0601 11:21:28.161573  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	W0601 11:21:28.161579  270029 addons.go:165] addon dashboard should already be in state true
	I0601 11:21:28.161483  270029 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161622  270029 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161631  270029 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:21:28.161636  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161669  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161500  270029 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161709  270029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601110327-6708"
	I0601 11:21:28.161949  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162094  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162123  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162229  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.209663  270029 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.211523  270029 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:21:28.213009  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:21:28.213030  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:21:28.213079  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.216922  270029 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.218989  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:21:28.217201  270029 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.219035  270029 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:21:28.219075  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.219579  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.219012  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:21:28.219781  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.236451  270029 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:21:26.218905  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.219209  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.238138  270029 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.238163  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:21:28.238217  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.246850  270029 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:21:28.246885  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:21:28.273680  270029 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.273707  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:21:28.273761  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.278846  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.279320  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.286384  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.321729  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.455756  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:21:28.455785  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:21:28.466348  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.469026  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:21:28.469067  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:21:28.469486  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.478076  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:21:28.478099  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:21:28.487008  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:21:28.487036  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:21:28.573106  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:21:28.573135  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:21:28.574698  270029 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
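The replace command logged at 11:21:28.246885 splices a hosts stanza ahead of CoreDNS's forward plugin and re-applies the configmap. Reconstructed from that sed expression (surrounding Corefile directives omitted), the injected region of the Corefile reads:

	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

The fallthrough directive passes every name other than host.minikube.internal on to the forward plugin, so only the host-gateway mapping is overridden; the success is then logged as the "host record injected into CoreDNS" line above.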
	I0601 11:21:28.577019  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.577042  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:21:28.653936  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:21:28.653967  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:21:28.658482  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.671762  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:21:28.671808  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:21:28.758424  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:21:28.758516  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:21:28.776703  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:21:28.776735  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:21:28.794636  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:21:28.794670  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:21:28.959418  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:28.959449  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:21:28.976465  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:29.354605  270029 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601110327-6708"
	I0601 11:21:29.699561  270029 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:21:29.700807  270029 addons.go:417] enableAddons completed in 1.543631535s
	I0601 11:21:30.260215  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:28.196140  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.694688  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:32.695236  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.219534  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.219685  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.260412  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:34.760173  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:36.760442  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:35.195034  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:37.195304  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:34.718805  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:36.719108  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:38.760533  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:40.761060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:39.694703  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:42.195994  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:39.219402  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:41.718982  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.719227  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.259684  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.260363  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.719329  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.719480  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.721505  254820 node_ready.go:38] duration metric: took 4m0.008123732s waiting for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:21:47.723918  254820 out.go:177] 
	W0601 11:21:47.725406  254820 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:21:47.725423  254820 out.go:239] * 
	W0601 11:21:47.726098  254820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:21:47.728001  254820 out.go:177] 
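At this point the old-k8s-version-20220601105850-6708 run gives up: the node never reported Ready before the wait expired, so the test exits with the GUEST_START / waitNodeCondition timeout boxed above. For reference, a minimal client-go sketch of the Ready-condition check that the node_ready.go poll loop keeps logging; the helper name, kubeconfig path, and error handling are illustrative assumptions, not minikube's actual code:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the named node's Ready condition is True.
	// minikube logs the negative case as: node "..." has status "Ready":"False".
	func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // kubelet has not posted a Ready condition yet
	}

	func main() {
		// Kubeconfig path taken from the ssh_runner commands above; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := nodeIsReady(context.Background(), cs, "old-k8s-version-20220601105850-6708")
		fmt.Println("ready:", ready, "err:", err)
	}

A check like this, repeated every couple of seconds (the cadence visible in the timestamps above) and still returning false at the deadline, is what surfaces as the "timed out waiting for the condition" failure.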
	I0601 11:21:44.695306  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.194624  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.760960  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:50.260784  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:49.195368  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:51.694946  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:52.760281  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:55.259912  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:54.194912  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:56.195652  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:57.259956  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:59.759755  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:01.759853  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:58.694995  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:01.194431  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:03.760721  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:06.260069  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:03.195297  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:05.694312  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:07.695082  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:08.260739  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:10.760237  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:10.194760  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:12.194885  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:13.259813  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:15.260153  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:14.195226  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:16.694528  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:17.260859  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:19.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:21.760654  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:18.695235  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:21.194694  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:24.260433  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:26.760129  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:23.197530  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:25.695229  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:28.760717  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:31.260368  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:28.194771  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:30.195026  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:32.694696  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:33.760112  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:35.760758  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:34.694930  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:36.695375  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:38.260723  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:40.760393  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:39.194795  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:41.694750  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:43.259823  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:45.260551  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:44.195389  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:46.695489  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:47.760311  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:49.760404  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:49.194395  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:51.195245  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:52.260594  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:54.760044  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:56.760073  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:53.195327  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:55.694893  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:58.760157  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:01.260267  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:58.194547  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:00.694762  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:03.260561  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:05.260780  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:03.195176  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:05.694698  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.695208  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.760513  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.260326  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.195039  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.695240  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.260674  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:14.260918  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:16.760064  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:15.195155  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:17.195241  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:18.760686  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:21.260676  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:19.694620  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:21.694667  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:23.760024  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:26.259746  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:24.194510  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:26.194546  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:28.260714  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:30.760541  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:28.194917  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:30.694766  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:33.260035  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:35.261060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:33.195328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:35.694682  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.695340  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.760144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.260334  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.194751  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.194853  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.759808  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:46.760285  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.695010  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:46.695526  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:48.760374  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:51.260999  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:49.194307  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:51.195053  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:53.760587  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:56.260172  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:53.195339  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:55.695153  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:58.759799  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:00.760631  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:58.194738  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:00.195407  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:02.695048  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:03.260687  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:05.260722  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:04.695337  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.194665  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.760567  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:10.260596  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:09.195069  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:11.694328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:12.260967  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.759793  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:16.760292  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.194996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:16.694542  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:18.760531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:20.760689  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:18.694668  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:20.695051  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:23.195952  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:24.691928  276679 pod_ready.go:81] duration metric: took 4m0.002724634s waiting for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	E0601 11:24:24.691955  276679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:24:24.691981  276679 pod_ready.go:38] duration metric: took 4m0.008258762s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:24:24.692005  276679 kubeadm.go:630] restartCluster took 4m14.973349857s
	W0601 11:24:24.692130  276679 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 11:24:24.692159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:24:26.286416  276679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.594228976s)
	I0601 11:24:26.286489  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:26.296314  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:24:26.303059  276679 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:24:26.303116  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:24:26.309917  276679 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:24:26.309957  276679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:24:22.761011  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:25.261206  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:26.556270  276679 out.go:204]   - Generating certificates and keys ...
	I0601 11:24:27.302083  276679 out.go:204]   - Booting up control plane ...
	I0601 11:24:27.261441  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:29.759885  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:32.260145  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:34.260990  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:36.760710  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:38.840585  276679 out.go:204]   - Configuring RBAC rules ...
	I0601 11:24:39.253770  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:24:39.253791  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:24:39.255739  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:24:39.259837  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:41.260124  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:39.257207  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:24:39.261207  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:24:39.261228  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:24:39.273744  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
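
The "scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)" step streams the kindnet manifest from the test process's memory straight onto the guest before the kubectl apply on the next line. One way to sketch that transfer, assuming an already-dialed golang.org/x/crypto/ssh client (minikube's real ssh_runner speaks the scp protocol and is more involved):

	package transfer

	import (
		"bytes"

		"golang.org/x/crypto/ssh"
	)

	// copyMemory pipes an in-memory manifest over an SSH session into a
	// root-owned file on the guest. Hypothetical shape, not minikube's code.
	func copyMemory(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run("sudo tee " + dst + " >/dev/null")
	}
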
	I0601 11:24:39.861493  276679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:24:39.861573  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.861574  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_24_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.914842  276679 ops.go:34] apiserver oom_adj: -16
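
The oom_adj probe above shells out to cat /proc/$(pgrep kube-apiserver)/oom_adj; the -16 it reads back means the kernel's OOM killer will pick almost any other process before the apiserver. Roughly the same read in Go (the pid is a stand-in for the pgrep result; newer kernels expose the equivalent knob as oom_score_adj):

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		pid := 1234 // stand-in for the pgrep kube-apiserver result
		data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
	}
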
	I0601 11:24:39.914913  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.498901  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.998931  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.499031  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.998593  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:42.499160  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.260473  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:45.760870  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:42.998966  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.498638  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.998319  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.498531  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.998678  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.499193  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.998418  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.498985  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.998941  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:47.498945  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.260450  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:50.260933  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:47.999272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.498439  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.999292  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.499272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.998339  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.498332  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.999106  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.499296  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.998980  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.498623  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.998371  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.498515  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.594790  276679 kubeadm.go:1045] duration metric: took 13.733266896s to wait for elevateKubeSystemPrivileges.
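
The half-second cadence of the "kubectl get sa default" runs above is a readiness gate: kubeadm init returns before the controller-manager has created the "default" ServiceAccount, and the clusterrolebinding/labeling work kicked off at 11:24:39 can only settle once it exists, hence the 13.7s of polling. A sketch of that loop with client-go's wait helper (assumed names, not minikube's actual function):

	package readiness

	import (
		"context"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForDefaultSA polls every 500ms until the "default" ServiceAccount
	// exists, mirroring the elevateKubeSystemPrivileges wait above.
	func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
		return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return err == nil, err
		})
	}
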
	I0601 11:24:53.594820  276679 kubeadm.go:397] StartCluster complete in 4m43.917251881s
	I0601 11:24:53.594841  276679 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:53.594938  276679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:24:53.596907  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:54.111475  276679 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:24:54.111547  276679 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:24:54.113711  276679 out.go:177] * Verifying Kubernetes components...
	I0601 11:24:54.111604  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:24:54.111644  276679 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:24:54.111802  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:24:54.115020  276679 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115035  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:54.115035  276679 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115048  276679 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115055  276679 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115057  276679 addons.go:165] addon storage-provisioner should already be in state true
	W0601 11:24:54.115064  276679 addons.go:165] addon metrics-server should already be in state true
	I0601 11:24:54.115034  276679 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115103  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115109  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115112  276679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115037  276679 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115134  276679 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115144  276679 addons.go:165] addon dashboard should already be in state true
	I0601 11:24:54.115176  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115416  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115596  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115611  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115615  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
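
The four cli_runner.go calls above fire concurrently, one per addon goroutine, each shelling out to docker to read the container's state through a Go template. A standalone equivalent (a sketch of the pattern, not minikube's cli_runner):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}}",
			"default-k8s-different-port-20220601110654-6708").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. "running"
	}
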
	I0601 11:24:54.129176  276679 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:24:54.168194  276679 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:24:54.169714  276679 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.171144  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:24:54.170891  276679 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.171181  276679 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:24:54.171211  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.171167  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:24:54.171329  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.171684  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.176157  276679 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.177770  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:24:54.177796  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:24:54.179131  276679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:24:54.177859  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.180787  276679 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.180809  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:24:54.180855  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.233206  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.240234  276679 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.240263  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:24:54.240311  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.240743  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.242497  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.255476  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
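
The sed pipeline above rewrites the CoreDNS ConfigMap in place, splicing a hosts stanza just before the Corefile's forward directive so that pods can resolve host.minikube.internal to the docker bridge gateway (confirmed by the "host record injected" line below). Reconstructed from the command itself, the injected block is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
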
	I0601 11:24:54.289597  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.510589  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.510747  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:24:54.510770  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:24:54.556919  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:24:54.556950  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:24:54.566012  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:24:54.566042  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:24:54.569528  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.576575  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:24:54.576625  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:24:54.654525  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:24:54.654551  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:24:54.655296  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.655319  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:24:54.661290  276679 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0601 11:24:54.671592  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:24:54.671621  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:24:54.673696  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.687107  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:24:54.687133  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:24:54.768961  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:24:54.768989  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:24:54.854363  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:24:54.854399  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:24:54.870735  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:24:54.870762  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:24:54.888031  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:54.888063  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:24:54.967082  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:55.273650  276679 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:55.661065  276679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 11:24:52.261071  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:54.261578  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:56.760078  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:55.662561  276679 addons.go:417] enableAddons completed in 1.550935677s
	I0601 11:24:56.136034  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:24:58.760245  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:00.760344  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:58.136131  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:00.136759  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:02.636409  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:03.260144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.260531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.136779  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.635969  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.760027  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:09.760904  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:10.136336  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.636564  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.260100  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.759992  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:16.760260  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.636694  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:17.137058  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:19.260136  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:21.260700  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:19.636331  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:22.136010  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:23.760875  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:26.261082  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:24.136501  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:26.636646  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:28.263320  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:28.263343  270029 node_ready.go:38] duration metric: took 4m0.016466534s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:25:28.265930  270029 out.go:177] 
	W0601 11:25:28.267524  270029 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:25:28.267549  270029 out.go:239] * 
	W0601 11:25:28.268404  270029 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:25:28.269962  270029 out.go:177] 
	I0601 11:25:28.637161  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:31.135894  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:33.136655  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:35.635923  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:37.636131  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:39.636319  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:42.136004  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:44.136847  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:46.636704  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:49.136203  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:51.136808  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:53.636402  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:56.135580  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:58.135934  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:00.136698  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:02.136807  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:04.636360  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:07.136003  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:09.136403  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:11.636023  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:13.636284  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:16.136059  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:18.635976  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:20.636471  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:23.136420  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:25.635898  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:27.636092  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:29.636223  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:32.135814  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:34.136208  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:36.136320  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:38.635965  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:41.136884  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:43.636083  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:46.136237  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:48.635722  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:51.135780  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:53.136057  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:55.136925  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:57.636578  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:00.135989  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:02.136086  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:04.136153  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:06.635746  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:08.636054  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:10.636582  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:13.136118  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:15.137042  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:17.636192  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:20.136181  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:22.136256  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:24.136756  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:26.636114  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:28.636414  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:31.136248  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:33.136847  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:35.635813  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:37.636126  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:39.636375  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:42.136175  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:44.636682  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:47.135843  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:49.136252  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:51.137073  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:53.636035  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:55.636279  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:58.136943  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:00.635664  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:02.636502  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:04.638145  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:07.136842  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:09.636372  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:12.136048  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:14.136569  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:16.635705  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:18.636532  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:21.136177  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:23.636753  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:26.136524  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:28.635691  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:30.636561  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:33.136478  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:35.636196  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:38.137078  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:40.636164  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:42.636749  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:45.136427  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:47.636180  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:49.636861  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:52.136563  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.136714  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.138823  276679 node_ready.go:38] duration metric: took 4m0.0096115s waiting for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:28:54.141397  276679 out.go:177] 
	W0601 11:28:54.143025  276679 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:28:54.143041  276679 out.go:239] * 
	W0601 11:28:54.143750  276679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:28:54.145729  276679 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	7c21d51412189       6de166512aa22       14 seconds ago      Exited              kindnet-cni               5                   df3be3bbc5f79
	2fb746cc75b1d       4c03754524064       4 minutes ago       Running             kube-proxy                0                   e819a7c456c7c
	dd66fe479b71f       595f327f224a4       4 minutes ago       Running             kube-scheduler            2                   c74ba4ef859aa
	7d3ead15d6ba2       25f8c7f3da61c       4 minutes ago       Running             etcd                      2                   d5f8156c990b4
	d21e78271b81a       df7b72818ad2e       4 minutes ago       Running             kube-controller-manager   2                   ee67c136c178d
	a01c09dc992a3       8fa62c12256df       4 minutes ago       Running             kube-apiserver            2                   36abb2c184cf8
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:19:53 UTC, end at Wed 2022-06-01 11:28:55 UTC. --
	Jun 01 11:26:13 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:26:13.492281445Z" level=warning msg="cleaning up after shim disconnected" id=165ba3016ba12414e2d9940e075fd871e222b6429d5ccfe2eb744ca211e6e1a6 namespace=k8s.io
	Jun 01 11:26:13 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:26:13.492299474Z" level=info msg="cleaning up dead shim"
	Jun 01 11:26:13 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:26:13.500809620Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:26:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4192 runtime=io.containerd.runc.v2\n"
	Jun 01 11:26:14 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:26:14.418695031Z" level=info msg="RemoveContainer for \"5a9539c011104226afa5545906136e8f9b03e0b6301ec9c2b2ad26e3a7586b3f\""
	Jun 01 11:26:14 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:26:14.423192568Z" level=info msg="RemoveContainer for \"5a9539c011104226afa5545906136e8f9b03e0b6301ec9c2b2ad26e3a7586b3f\" returns successfully"
	Jun 01 11:26:58 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:26:58.181880235Z" level=info msg="CreateContainer within sandbox \"df3be3bbc5f79542be5bca9d7d7637b0cac5b8ac05520962d10fb8e4166ec4b9\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Jun 01 11:26:58 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:26:58.196889095Z" level=info msg="CreateContainer within sandbox \"df3be3bbc5f79542be5bca9d7d7637b0cac5b8ac05520962d10fb8e4166ec4b9\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30\""
	Jun 01 11:26:58 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:26:58.197361098Z" level=info msg="StartContainer for \"52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30\""
	Jun 01 11:26:58 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:26:58.357522806Z" level=info msg="StartContainer for \"52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30\" returns successfully"
	Jun 01 11:27:08 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:27:08.588973619Z" level=info msg="shim disconnected" id=52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30
	Jun 01 11:27:08 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:27:08.589030952Z" level=warning msg="cleaning up after shim disconnected" id=52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30 namespace=k8s.io
	Jun 01 11:27:08 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:27:08.589054923Z" level=info msg="cleaning up dead shim"
	Jun 01 11:27:08 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:27:08.598709682Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:27:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4269 runtime=io.containerd.runc.v2\n"
	Jun 01 11:27:09 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:27:09.517083886Z" level=info msg="RemoveContainer for \"165ba3016ba12414e2d9940e075fd871e222b6429d5ccfe2eb744ca211e6e1a6\""
	Jun 01 11:27:09 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:27:09.521281988Z" level=info msg="RemoveContainer for \"165ba3016ba12414e2d9940e075fd871e222b6429d5ccfe2eb744ca211e6e1a6\" returns successfully"
	Jun 01 11:28:41 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:41.180809515Z" level=info msg="CreateContainer within sandbox \"df3be3bbc5f79542be5bca9d7d7637b0cac5b8ac05520962d10fb8e4166ec4b9\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Jun 01 11:28:41 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:41.193366594Z" level=info msg="CreateContainer within sandbox \"df3be3bbc5f79542be5bca9d7d7637b0cac5b8ac05520962d10fb8e4166ec4b9\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"7c21d514121895b0dbb3e8edace6db1999e4a2588c3d100ca15a3d28276ae8a3\""
	Jun 01 11:28:41 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:41.193899651Z" level=info msg="StartContainer for \"7c21d514121895b0dbb3e8edace6db1999e4a2588c3d100ca15a3d28276ae8a3\""
	Jun 01 11:28:41 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:41.268103128Z" level=info msg="StartContainer for \"7c21d514121895b0dbb3e8edace6db1999e4a2588c3d100ca15a3d28276ae8a3\" returns successfully"
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.490479023Z" level=info msg="shim disconnected" id=7c21d514121895b0dbb3e8edace6db1999e4a2588c3d100ca15a3d28276ae8a3
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.490544193Z" level=warning msg="cleaning up after shim disconnected" id=7c21d514121895b0dbb3e8edace6db1999e4a2588c3d100ca15a3d28276ae8a3 namespace=k8s.io
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.490563854Z" level=info msg="cleaning up dead shim"
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.499331935Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:28:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4351 runtime=io.containerd.runc.v2\n"
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.693814558Z" level=info msg="RemoveContainer for \"52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30\""
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.698952344Z" level=info msg="RemoveContainer for \"52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220601110654-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220601110654-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_24_39_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:24:36 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220601110654-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:28:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:24:52 +0000   Wed, 01 Jun 2022 11:24:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:24:52 +0000   Wed, 01 Jun 2022 11:24:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:24:52 +0000   Wed, 01 Jun 2022 11:24:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:24:52 +0000   Wed, 01 Jun 2022 11:24:34 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220601110654-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                c3073178-0849-48bb-88da-ba72ab8c4ba0
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220601110654-6708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m11s
	  kube-system                 kindnet-bzkn8                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220601110654-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220601110654-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-nfvrv                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220601110654-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m1s   kube-proxy  
	  Normal  Starting                 4m11s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s  kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s  kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s  kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s  kubelet     Updated Node Allocatable limit across pods
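
Two lines in this describe output explain most of the report: the Ready condition is False with reason "cni plugin not initialized", and the node.kubernetes.io/not-ready:NoSchedule taint is still in place, which keeps unschedulable pods (such as the dashboard pods later on) Pending. The kubelet raises that reason when its CNI conf directory contains no network config. A quick hypothetical check follows, assuming the stock conf dir /etc/cni/net.d (this cluster overrides the path):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Kubelet's default CNI conf dir; the path here is an assumption for
	// illustration, since clusters can point the kubelet elsewhere.
	confDir := "/etc/cni/net.d"
	matches, err := filepath.Glob(filepath.Join(confDir, "*.conf*")) // .conf and .conflist
	if err != nil || len(matches) == 0 {
		fmt.Println("no CNI config in", confDir, "- kubelet will report the node NotReady")
		os.Exit(1)
	}
	for _, m := range matches {
		fmt.Println("found CNI config:", m)
	}
}
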
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [7d3ead15d6ba2e4b8c432e1081c87bd87496d8d69e3abb714f29c65bba94ebdf] <==
	* {"level":"info","ts":"2022-06-01T11:24:33.672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T11:24:33.672Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:24:34.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:24:34.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:24:34.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:24:34.467Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:24:34.468Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220601110654-6708 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	
	* 
	* ==> kernel <==
	*  11:28:55 up  1:11,  0 users,  load average: 0.42, 0.96, 1.55
	Linux default-k8s-different-port-20220601110654-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a01c09dc992a3fcb76c065eaf6d9a37f822bb84514f98be837fc943d82bc46d3] <==
	* I0601 11:24:37.878098       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:24:37.881316       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:24:38.445377       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:24:39.057837       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:24:39.064636       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:24:39.072879       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:24:44.165648       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:24:52.712446       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:24:53.460531       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:24:54.003997       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:24:55.266381       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.98.44.157]
	I0601 11:24:55.591591       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.107.49.191]
	I0601 11:24:55.601444       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.177.167]
	W0601 11:24:56.161345       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:24:56.161420       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:24:56.161435       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:25:56.162132       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:25:56.162192       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:25:56.162201       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:27:56.162983       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:27:56.163055       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:27:56.163065       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [d21e78271b81ab20da16b5cd9e947f35b35db3023a93fc154c959b24cd029c28] <==
	* E0601 11:24:55.490243       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:24:55.553315       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:24:55.553410       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:24:55.557958       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:24:55.557964       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:24:55.557973       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:24:55.558022       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:24:55.567269       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-k8wsb"
	I0601 11:24:55.655697       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-p9hc5"
	E0601 11:25:22.732124       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:25:23.144913       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:25:52.749328       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:25:53.158804       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:26:22.769136       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:26:23.174531       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:26:52.787857       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:26:53.188861       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:27:22.805434       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:27:23.205731       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:27:52.825402       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:27:53.221224       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:28:22.842528       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:28:23.235686       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:28:52.852745       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:28:53.251515       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
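
The resource-quota and garbage-collector errors repeating every thirty seconds here all trace back to one cause: the metrics.k8s.io/v1beta1 APIService has no ready backend, because metrics-server cannot run while the node is NotReady. The same partial-discovery failure is observable from any client; a short sketch with client-go's discovery client (kubeconfig path assumed):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc := discovery.NewDiscoveryClientForConfigOrDie(cfg)
	// ServerGroupsAndResources returns partial results plus an error naming
	// aggregated groups with no ready backend - the same condition the
	// controller-manager logs as "failed to discover some groups".
	if _, _, err := dc.ServerGroupsAndResources(); err != nil {
		fmt.Println("discovery incomplete:", err)
	}
}
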
	
	* 
	* ==> kube-proxy [2fb746cc75b1d529404d0b3097c5644a162207995ae1736ab99ed2a7508b8ae8] <==
	* I0601 11:24:53.979664       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:24:53.979726       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:24:53.979767       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:24:54.001129       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:24:54.001171       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:24:54.001182       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:24:54.001206       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:24:54.001552       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:24:54.002098       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:24:54.002134       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:24:54.002209       1 config.go:317] "Starting service config controller"
	I0601 11:24:54.002223       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:24:54.102778       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:24:54.102782       1 shared_informer.go:247] Caches are synced for service config 
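
The "Unknown proxy mode, assuming iptables proxy" line reflects kube-proxy's fallback when proxyMode is left empty in its configuration. A toy rendition of that decision, not kube-proxy's actual code:

package main

import "fmt"

// chooseProxyMode mimics the fallback behind the log line above: an empty
// or unrecognized proxyMode is treated as "iptables".
func chooseProxyMode(configured string) string {
	switch configured {
	case "iptables", "ipvs", "userspace":
		return configured
	default: // empty or unrecognized
		return "iptables"
	}
}

func main() {
	fmt.Println(chooseProxyMode("")) // "iptables", matching the log
}
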
	
	* 
	* ==> kube-scheduler [dd66fe479b71f0dd37f716863c649f5efd7903cab492c2dfddeedc600bf510a0] <==
	* W0601 11:24:36.370197       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:24:36.370227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:24:36.371109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:24:36.371140       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:24:36.371150       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:24:36.371182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:24:36.371361       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:24:36.371399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:24:36.371503       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:24:36.371527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:24:36.371525       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:24:36.371543       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:24:37.249365       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:24:37.249400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:24:37.283809       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:24:37.283835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:24:37.285837       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:24:37.285861       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:24:37.454156       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:24:37.454193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:24:37.582046       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:24:37.582079       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:24:37.582723       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:24:37.582768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0601 11:24:37.963656       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:19:53 UTC, end at Wed 2022-06-01 11:28:55 UTC. --
	Jun 01 11:27:48 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:27:48.178584    3056 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-bzkn8_kube-system(4b32f531-7d27-4ce4-900c-f7566d5281ca)\"" pod="kube-system/kindnet-bzkn8" podUID=4b32f531-7d27-4ce4-900c-f7566d5281ca
	Jun 01 11:27:49 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:27:49.403577    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:27:54 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:27:54.404843    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:27:59 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:27:59.406438    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:02 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:28:02.178655    3056 scope.go:110] "RemoveContainer" containerID="52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30"
	Jun 01 11:28:02 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:02.179070    3056 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-bzkn8_kube-system(4b32f531-7d27-4ce4-900c-f7566d5281ca)\"" pod="kube-system/kindnet-bzkn8" podUID=4b32f531-7d27-4ce4-900c-f7566d5281ca
	Jun 01 11:28:04 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:04.407391    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:09 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:09.409123    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:14 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:14.410591    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:16 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:28:16.178421    3056 scope.go:110] "RemoveContainer" containerID="52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30"
	Jun 01 11:28:16 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:16.178705    3056 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-bzkn8_kube-system(4b32f531-7d27-4ce4-900c-f7566d5281ca)\"" pod="kube-system/kindnet-bzkn8" podUID=4b32f531-7d27-4ce4-900c-f7566d5281ca
	Jun 01 11:28:19 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:19.411841    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:24 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:24.413427    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:28 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:28:28.178825    3056 scope.go:110] "RemoveContainer" containerID="52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30"
	Jun 01 11:28:28 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:28.179130    3056 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-bzkn8_kube-system(4b32f531-7d27-4ce4-900c-f7566d5281ca)\"" pod="kube-system/kindnet-bzkn8" podUID=4b32f531-7d27-4ce4-900c-f7566d5281ca
	Jun 01 11:28:29 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:29.414778    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:34 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:34.416108    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:39 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:39.417321    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:41 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:28:41.178645    3056 scope.go:110] "RemoveContainer" containerID="52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30"
	Jun 01 11:28:44 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:44.418886    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:49 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:49.420056    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:28:51.692523    3056 scope.go:110] "RemoveContainer" containerID="52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30"
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:28:51.692920    3056 scope.go:110] "RemoveContainer" containerID="7c21d514121895b0dbb3e8edace6db1999e4a2588c3d100ca15a3d28276ae8a3"
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:51.693257    3056 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-bzkn8_kube-system(4b32f531-7d27-4ce4-900c-f7566d5281ca)\"" pod="kube-system/kindnet-bzkn8" podUID=4b32f531-7d27-4ce4-900c-f7566d5281ca
	Jun 01 11:28:54 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:28:54.421198    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
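
The kindnet-cni back-off in this log grows from 1m20s to 2m40s because the kubelet doubles its crash-loop delay after each failed restart, starting at 10s and capping at 5m (upstream kubelet defaults, assumed here rather than read from this cluster). A short sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed upstream defaults: base 10s, doubling per restart, 5m cap.
	const maxBackoff = 5 * time.Minute
	d := 10 * time.Second
	for i := 1; d <= maxBackoff; i++ {
		// Restarts 4 and 5 print 1m20s and 2m40s, as seen in the log.
		fmt.Printf("restart %d: back-off %s\n", i, d)
		d *= 2
	}
	fmt.Println("later restarts: back-off", maxBackoff)
}
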
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-xtfld metrics-server-b955d9d8-qgk2q storage-provisioner dashboard-metrics-scraper-56974995fc-p9hc5 kubernetes-dashboard-8469778f77-k8wsb
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 describe pod coredns-64897985d-xtfld metrics-server-b955d9d8-qgk2q storage-provisioner dashboard-metrics-scraper-56974995fc-p9hc5 kubernetes-dashboard-8469778f77-k8wsb
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod coredns-64897985d-xtfld metrics-server-b955d9d8-qgk2q storage-provisioner dashboard-metrics-scraper-56974995fc-p9hc5 kubernetes-dashboard-8469778f77-k8wsb: exit status 1 (54.241179ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-xtfld" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-qgk2q" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-p9hc5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-k8wsb" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod coredns-64897985d-xtfld metrics-server-b955d9d8-qgk2q storage-provisioner dashboard-metrics-scraper-56974995fc-p9hc5 kubernetes-dashboard-8469778f77-k8wsb: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (543.35s)
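
One footnote on the post-mortem above: the jsonpath query did find the non-running pods, but the follow-up kubectl describe reported NotFound because it ran without a namespace flag and therefore searched only the default namespace, while the pods live in kube-system and kubernetes-dashboard. The equivalent cross-namespace query in client-go, using the same field selector the helper passes to kubectl (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Printing the namespace shows why a bare `kubectl describe pod`
		// (which defaults to the "default" namespace) reported NotFound.
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
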

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-8d9mk" [d13fc9c2-161d-4ea9-940d-81951faec2fc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
E0601 11:21:53.084932    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
E0601 11:22:04.948898    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 11:22:12.929185    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 11:22:21.871025    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 11:22:54.651979    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:23:34.904745    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
E0601 11:24:09.242468    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
E0601 11:24:14.134097    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 11:24:22.035014    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:24:31.086857    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 11:24:36.926105    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:29:09.241755    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:29:22.034887    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:29:31.087163    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:30:40.380254    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
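The inner error in the condensed warning above is the Kubernetes client's rate limiter refusing to wait once the test's shared context has already passed its deadline. A minimal sketch of that behavior, using golang.org/x/time/rate as a stand-in for client-go's limiter (the package choice and all names here are illustrative assumptions, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate" // stand-in for client-go's flow-control limiter
)

func main() {
	// A limiter allowing 1 request/sec; the test helper polls far more
	// often than that once many goroutines share one client.
	limiter := rate.NewLimiter(rate.Limit(1), 1)

	// Simulate the test's overall deadline having already passed.
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond) // let the deadline expire

	// Wait refuses to block past the deadline and returns the context's
	// error, which the client then reports with the
	// "client rate limiter Wait returned an error: ..." prefix.
	if err := limiter.Wait(ctx); err != nil {
		fmt.Println("client rate limiter Wait returned an error:", err)
	}
}

Against an expired context, Wait returns immediately with "context deadline exceeded" instead of sleeping, which is why the helper can log the same warning many times in quick succession.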
start_stop_delete_test.go:276: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
start_stop_delete_test.go:276: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-06-01 11:30:50.129186836 +0000 UTC m=+4251.128807077
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 describe po kubernetes-dashboard-6fb5469cf5-8d9mk -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601105850-6708 describe po kubernetes-dashboard-6fb5469cf5-8d9mk -n kubernetes-dashboard: context deadline exceeded (1.34µs)
start_stop_delete_test.go:276: kubectl --context old-k8s-version-20220601105850-6708 describe po kubernetes-dashboard-6fb5469cf5-8d9mk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 logs kubernetes-dashboard-6fb5469cf5-8d9mk -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601105850-6708 logs kubernetes-dashboard-6fb5469cf5-8d9mk -n kubernetes-dashboard: context deadline exceeded (308ns)
start_stop_delete_test.go:276: kubectl --context old-k8s-version-20220601105850-6708 logs kubernetes-dashboard-6fb5469cf5-8d9mk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
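The "timed out waiting for the condition" text is the stock wait.ErrWaitTimeout message from k8s.io/apimachinery's wait package, which the test relies on while polling for the labelled pod. A rough sketch of that polling pattern (the condition body is a placeholder, not the helper's real pod-listing logic, and the sketch really does block for the full timeout):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 3s, give up after 9m0s; on timeout PollImmediate returns
	// wait.ErrWaitTimeout, whose message is exactly
	// "timed out waiting for the condition".
	err := wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
		ready := false // placeholder: list pods matching k8s-app=kubernetes-dashboard
		return ready, nil
	})
	if err != nil {
		fmt.Println(err) // timed out waiting for the condition
	}
}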
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601105850-6708
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601105850-6708:
-- stdout --
	[
	    {
	        "Id": "3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0",
	        "Created": "2022-06-01T10:59:00.78565124Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255104,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:11:54.443188139Z",
	            "FinishedAt": "2022-06-01T11:11:52.690867678Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/hosts",
	        "LogPath": "/var/lib/docker/containers/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0/3b070aceb31160fc5815dbff4454d13f7f86eaa9a885ff77acd0f313e36673c0-json.log",
	        "Name": "/old-k8s-version-20220601105850-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601105850-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601105850-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54107ef3bba69957e2904f92616edbbee2b856262d277e35dd0855c862772266/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601105850-6708",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601105850-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601105850-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601105850-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0960bb8f97b755414eac0338bfc1078877300285cb015d048bc6cd05ee3ed170",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49422"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49421"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49418"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49419"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0960bb8f97b7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601105850-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3b070aceb311",
	                        "old-k8s-version-20220601105850-6708"
	                    ],
	                    "NetworkID": "99443bab5d3fa350d07dfff0b6c1624f2cd2601ac21b76ee77d57de53df02f62",
	                    "EndpointID": "74753f08c4bc626a78cf7d97ad5a40c516e6b8e6d55bde671c073b80db81c952",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
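The post-mortem shells out to docker inspect and scans the JSON above for the container's state and the host ports mapped to 22/tcp and 8443/tcp. The same fields can be read programmatically; a minimal sketch using Docker's Go SDK (assuming github.com/docker/docker/client is available; this is illustrative, not the harness's actual code):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Same data as `docker inspect old-k8s-version-20220601105850-6708`.
	info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-20220601105850-6708")
	if err != nil {
		panic(err)
	}

	fmt.Println("status:", info.State.Status) // e.g. "running"
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort) // e.g. 8443/tcp -> 127.0.0.1:49419
		}
	}
}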
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220601105850-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | calico-20220601104839-6708                                 | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220601104839-6708                              | calico-20220601104839-6708                     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:14 UTC |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:21 UTC | 01 Jun 22 11:21 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:25 UTC | 01 Jun 22 11:25 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:28 UTC | 01 Jun 22 11:28 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
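The audit table above records each entry as command, args, profile, user, and version; a row like `logs -n 25` corresponds to an ordinary CLI invocation against the named profile. A minimal sketch of re-running one entry from Go, mirroring how the harness shells out to the built binary (the exact flag ordering the harness used is not recorded in the table, so this is one plausible reconstruction):

	// repro_logs.go - re-run one audit-table entry ("logs -n 25").
	// Binary path and profile name are taken from the table above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "default-k8s-different-port-20220601110654-6708",
			"logs", "-n", "25")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("exit:", err)
		}
	}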
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:19:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:19:52.827023  276679 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:19:52.827225  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827237  276679 out.go:309] Setting ErrFile to fd 2...
	I0601 11:19:52.827242  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827359  276679 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:19:52.827588  276679 out.go:303] Setting JSON to false
	I0601 11:19:52.828890  276679 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3747,"bootTime":1654078646,"procs":456,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:19:52.828955  276679 start.go:125] virtualization: kvm guest
	I0601 11:19:52.831944  276679 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:19:52.833439  276679 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:52.833372  276679 notify.go:193] Checking for updates...
	I0601 11:19:52.835007  276679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:52.836578  276679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:19:52.837966  276679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:19:52.839440  276679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:19:52.841215  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:52.841578  276679 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:52.880823  276679 docker.go:137] docker version: linux-20.10.16
	I0601 11:19:52.880897  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:52.978177  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:52.908721136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
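The docker info dump above is produced by the `docker system info --format "{{json .}}"` run logged just before it; minikube decodes the JSON into a Go struct (info.go:265). A minimal sketch that decodes only a few of the fields visible in the dump; minikube's real struct carries many more:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo holds a small subset of the fields visible in the dump above.
	type dockerInfo struct {
		NCPU          int
		MemTotal      int64
		ServerVersion string
		Driver        string
	}

	func main() {
		out, err := exec.Command("docker", "system", "info",
			"--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("cpus=%d mem=%d version=%s driver=%s\n",
			info.NCPU, info.MemTotal, info.ServerVersion, info.Driver)
	}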
	I0601 11:19:52.978275  276679 docker.go:254] overlay module found
	I0601 11:19:52.981078  276679 out.go:177] * Using the docker driver based on existing profile
	I0601 11:19:52.982316  276679 start.go:284] selected driver: docker
	I0601 11:19:52.982326  276679 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:52.982412  276679 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:52.983242  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:53.085320  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:53.012439643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:53.085561  276679 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:19:53.085581  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:19:53.085589  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:19:53.085608  276679 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
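The cni.go decision above (docker driver + containerd runtime, recommending kindnet) is the key branch for this profile: with a non-docker runtime inside the kic container, a real CNI has to be installed. An illustrative simplification of that decision; the actual cni.go also honors an explicit --cni value and many other driver/runtime pairs:

	package main

	import "fmt"

	// chooseCNI mirrors the branch visible in the log: docker driver plus a
	// non-docker runtime (here containerd) gets kindnet. The bridge fallback
	// is an assumption for this sketch, not minikube's full logic.
	func chooseCNI(driver, runtime string) string {
		if driver == "docker" && runtime != "docker" {
			return "kindnet"
		}
		return "bridge"
	}

	func main() {
		fmt.Println(chooseCNI("docker", "containerd")) // kindnet
	}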
	I0601 11:19:53.088575  276679 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.089964  276679 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:19:53.091501  276679 out.go:177] * Pulling base image ...
	I0601 11:19:53.092800  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:19:53.092839  276679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:19:53.092856  276679 cache.go:57] Caching tarball of preloaded images
	I0601 11:19:53.092897  276679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:19:53.093061  276679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:19:53.093076  276679 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:19:53.093182  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.136384  276679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:19:53.136410  276679 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
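The "Found ... in local docker daemon, skipping pull" step above checks for the kicbase image locally before downloading anything. A sketch of the same check using `docker image inspect`, which exits non-zero when the reference is absent (the @sha256 digest suffix is dropped here for brevity):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// haveImage reports whether an image already exists in the local docker
	// daemon, matching the skip-pull decision in the log above.
	func haveImage(ref string) bool {
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807"
		if haveImage(ref) {
			fmt.Println("exists in daemon, skipping load")
		} else {
			fmt.Println("would pull", ref)
		}
	}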
	I0601 11:19:53.136424  276679 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:19:53.136454  276679 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:19:53.136550  276679 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 69.025µs
	I0601 11:19:53.136570  276679 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:19:53.136577  276679 fix.go:55] fixHost starting: 
	I0601 11:19:53.137208  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.168642  276679 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601110654-6708: state=Stopped err=<nil>
	W0601 11:19:53.168681  276679 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:19:53.170972  276679 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601110654-6708" ...
	I0601 11:19:50.719789  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.220276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.243194  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:55.243470  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:53.172500  276679 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.580842  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.615796  276679 kic.go:416] container "default-k8s-different-port-20220601110654-6708" state is running.
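The restart path above is: inspect `.State.Status`, `docker start`, then inspect again until the container reports running. A compact sketch of that loop; the 30s deadline and 500ms poll interval are assumptions, not minikube's values:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// state returns the container's .State.Status, as in the inspect runs above.
	func state(name string) string {
		out, _ := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		name := "default-k8s-different-port-20220601110654-6708"
		exec.Command("docker", "start", name).Run()
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			if state(name) == "running" {
				fmt.Println("container is running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for running state")
	}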
	I0601 11:19:53.616193  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.647308  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.647503  276679 machine.go:88] provisioning docker machine ...
	I0601 11:19:53.647526  276679 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:19:53.647560  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.679842  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:53.680106  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:53.680131  276679 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:19:53.680742  276679 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55946->127.0.0.1:49442: read: connection reset by peer
	I0601 11:19:56.807880  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:19:56.807951  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.839321  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:56.839475  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:56.839510  276679 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:19:56.951445  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:19:56.951473  276679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:19:56.951491  276679 ubuntu.go:177] setting up certificates
	I0601 11:19:56.951499  276679 provision.go:83] configureAuth start
	I0601 11:19:56.951539  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.982392  276679 provision.go:138] copyHostCerts
	I0601 11:19:56.982451  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:19:56.982464  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:19:56.982537  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:19:56.982652  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:19:56.982664  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:19:56.982697  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:19:56.982789  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:19:56.982802  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:19:56.982829  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:19:56.982876  276679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
	I0601 11:19:57.067574  276679 provision.go:172] copyRemoteCerts
	I0601 11:19:57.067626  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:19:57.067654  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.098669  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.182904  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:19:57.199734  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:19:57.215838  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:19:57.232284  276679 provision.go:86] duration metric: configureAuth took 280.774927ms
	I0601 11:19:57.232312  276679 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:19:57.232468  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:57.232480  276679 machine.go:91] provisioned docker machine in 3.584963826s
	I0601 11:19:57.232486  276679 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:19:57.232492  276679 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:19:57.232530  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:19:57.232572  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.265048  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.351029  276679 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:19:57.353646  276679 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:19:57.353677  276679 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:19:57.353687  276679 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:19:57.353695  276679 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:19:57.353706  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:19:57.353765  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:19:57.353858  276679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:19:57.353951  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:19:57.360153  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:19:57.376881  276679 start.go:309] post-start completed in 144.384989ms
	I0601 11:19:57.376932  276679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:19:57.376962  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.411118  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.496188  276679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:19:57.499982  276679 fix.go:57] fixHost completed within 4.363400058s
	I0601 11:19:57.500005  276679 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 4.363442227s
	I0601 11:19:57.500082  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532057  276679 ssh_runner.go:195] Run: systemctl --version
	I0601 11:19:57.532107  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532107  276679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:19:57.532168  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.567039  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.567550  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.677865  276679 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:19:57.688848  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:19:57.697588  276679 docker.go:187] disabling docker service ...
	I0601 11:19:57.697632  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:19:57.706476  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:19:57.714826  276679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:19:57.791919  276679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:19:55.719582  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:58.219607  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:57.743387  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:00.243011  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:57.865357  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:19:57.874183  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:19:57.886120  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.893706  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.901159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.908873  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.916512  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:19:57.923712  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0601 11:19:57.935738  276679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:19:57.941802  276679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:19:57.947777  276679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:19:58.021579  276679 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:19:58.089337  276679 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:19:58.089424  276679 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:19:58.092751  276679 start.go:468] Will wait 60s for crictl version
	I0601 11:19:58.092798  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:19:58.116611  276679 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:19:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:20:00.719494  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:03.219487  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:02.243060  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:04.243463  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:06.244423  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:05.719159  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:07.719735  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:09.163975  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:20:09.186613  276679 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:20:09.186676  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.214385  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.243587  276679 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:20:09.245245  276679 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:09.276501  276679 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:20:09.279800  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.290992  276679 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:20:08.742836  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:11.242670  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:09.292426  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:20:09.292493  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.315170  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.315189  276679 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:20:09.315224  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.338119  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.338137  276679 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:20:09.338184  276679 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:20:09.360773  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:09.360799  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:09.360817  276679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:20:09.360831  276679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:20:09.360999  276679 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
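(The `"0%!"(MISSING)` runs in evictionHard above are the same fmt-escaping artifact noted earlier; the thresholds minikube generates plausibly read `"0%"`.) This config is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below) and would normally be consumed through kubeadm's --config flag. A hedged sketch of that hand-off; minikube's exact kubeadm invocation and any extra flags are not shown in this log:

	package main

	import (
		"os"
		"os/exec"
	)

	// Illustrative only: feed the generated config to kubeadm.
	// --config itself is a real kubeadm flag.
	func main() {
		cmd := exec.Command("sudo", "kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml.new")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}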
	
	I0601 11:20:09.361105  276679 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 11:20:09.361162  276679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:20:09.368101  276679 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:20:09.368169  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:20:09.374382  276679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:20:09.386282  276679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:20:09.398188  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:20:09.409736  276679 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:20:09.412458  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.420789  276679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:20:09.420897  276679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:20:09.420940  276679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:20:09.421000  276679 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:20:09.421053  276679 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:20:09.421088  276679 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:20:09.421176  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:20:09.421205  276679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:20:09.421216  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:20:09.421244  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:20:09.421270  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:20:09.421298  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:20:09.421334  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:20:09.421917  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:20:09.438490  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:20:09.454711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:20:09.471469  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:20:09.488271  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:20:09.504375  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:20:09.520473  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:20:09.536663  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:20:09.552725  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:20:09.568724  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:20:09.584711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:20:09.600406  276679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:20:09.611814  276679 ssh_runner.go:195] Run: openssl version
	I0601 11:20:09.616280  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:20:09.623058  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625881  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625913  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.630367  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:20:09.636712  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:20:09.643407  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646316  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646366  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.650791  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:20:09.657126  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:20:09.663990  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666934  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666966  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.671359  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
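	
	The three test/ln/openssl sequences above are how minikube installs host CA material inside the node: each PEM is placed under /usr/share/ca-certificates, hashed with openssl x509 -hash, and exposed under /etc/ssl/certs as <subject-hash>.0 so OpenSSL can resolve it by subject hash. A minimal sketch of the same idiom, using the cert path from the log:
	
	    # Install a CA cert the way the log above does: hash, then symlink.
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	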
	I0601 11:20:09.677573  276679 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:09.677668  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:20:09.677695  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:09.700805  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:09.700825  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:09.700835  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:09.700844  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:09.700853  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:09.700863  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:09.700870  276679 cri.go:87] found id: ""
	I0601 11:20:09.700900  276679 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:20:09.711953  276679 cri.go:114] JSON = null
	W0601 11:20:09.711995  276679 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
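	
	The mismatch warned about above is minikube's paused-container cross-check: crictl, filtered by the io.kubernetes.pod.namespace=kube-system label, reports six container IDs, but querying runc's containerd state root for paused containers returns null JSON, so the unpause bookkeeping is skipped. The two sides of the check, exactly as run in the log:
	
	    # List kube-system container IDs known to the CRI ...
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # ... and ask runc (containerd's k8s.io state root) what it is tracking.
	    sudo runc --root /run/containerd/runc/k8s.io list -f json
	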
	I0601 11:20:09.712052  276679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:20:09.718628  276679 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:20:09.718649  276679 kubeadm.go:626] restartCluster start
	I0601 11:20:09.718687  276679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:20:09.724992  276679 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.725567  276679 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601110654-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:20:09.725941  276679 kubeconfig.go:127] "default-k8s-different-port-20220601110654-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:20:09.726552  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:20:09.727803  276679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:20:09.734151  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.734186  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.741699  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.942065  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.942125  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.950479  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.142775  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.142860  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.151184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.342428  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.342511  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.350942  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.542230  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.542324  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.550731  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.741765  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.741840  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.750184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.942518  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.942589  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.951137  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.142442  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.142519  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.151332  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.342632  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.342693  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.351149  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.542423  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.542483  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.550625  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.741869  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.741945  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.750554  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.942776  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.942855  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.951226  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.142534  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.142617  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.151065  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.342354  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.342429  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.350855  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.542142  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.542207  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.550615  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.741824  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.741894  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.750511  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.750537  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.750569  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.758099  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
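	
	Each iteration of the loop above is the same liveness poll, issued roughly every 200ms: pgrep -xnf matches the full kube-apiserver command line, and exit status 1 (no match) is logged as "stopped". The equivalent one-off check:
	
	    # Status 0 = a kube-apiserver matching minikube's pattern is running;
	    # status 1 = not running, as in every retry above.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo running || echo stopped
	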
	I0601 11:20:12.758124  276679 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 11:20:12.758131  276679 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:20:12.758146  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:20:12.758196  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:12.782896  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:12.782918  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:12.782924  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:12.782931  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:12.782936  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:12.782943  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:12.782948  276679 cri.go:87] found id: ""
	I0601 11:20:12.782955  276679 cri.go:232] Stopping containers: [fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44]
	I0601 11:20:12.782994  276679 ssh_runner.go:195] Run: which crictl
	I0601 11:20:12.785799  276679 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44
	I0601 11:20:12.809504  276679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:20:12.819061  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:20:12.825913  276679 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:07 /etc/kubernetes/scheduler.conf
	
	I0601 11:20:12.825968  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 11:20:10.219173  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:12.219371  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:13.243691  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:15.243798  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:12.832916  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 11:20:12.839178  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.845567  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.845605  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.851603  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 11:20:12.857919  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.857967  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 11:20:12.864112  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870523  276679 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870540  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:12.912381  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.433508  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.566844  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.617762  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
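	
	Rather than a full kubeadm init, the restart path replays individual init phases against the existing /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and a local etcd, in that order. Collected from the five commands above:
	
	    # Replay the kubeadm phases used for a cluster restart, in order.
	    BIN=/var/lib/minikube/binaries/v1.23.6
	    CFG=/var/tmp/minikube/kubeadm.yaml
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        # $phase is intentionally unquoted so "certs all" splits into two args.
	        sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
	    done
	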
	I0601 11:20:13.686212  276679 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:20:13.686269  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.195273  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.695296  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.195457  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.695544  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.195542  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.695465  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.195333  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.694666  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.719337  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.218953  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.742741  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:20.244002  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:18.194692  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:18.694918  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.195623  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.695137  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.758656  276679 api_server.go:71] duration metric: took 6.072444993s to wait for apiserver process to appear ...
	I0601 11:20:19.758687  276679 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:20:19.758700  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.369047  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:20:22.369078  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:20:19.718920  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:21.719314  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:23.719804  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:22.869917  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.874561  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:22.874589  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.370203  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.375048  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:23.375073  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.869242  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.874012  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0601 11:20:23.879941  276679 api_server.go:140] control plane version: v1.23.6
	I0601 11:20:23.879963  276679 api_server.go:130] duration metric: took 4.121269797s to wait for apiserver health ...
	I0601 11:20:23.879972  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:23.879977  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:23.882052  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:20:22.743507  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:25.242700  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:23.883460  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:20:23.886921  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:20:23.886945  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:20:23.899955  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
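	
	With the apiserver healthy, the kindnet manifest is rendered to /var/tmp/minikube/cni.yaml over SSH and applied with the version-matched kubectl against the node-local kubeconfig, i.e.:
	
	    # Apply the generated CNI manifest from inside the node.
	    sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply \
	        --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	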
	I0601 11:20:24.544438  276679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:20:24.550979  276679 system_pods.go:59] 9 kube-system pods found
	I0601 11:20:24.551015  276679 system_pods.go:61] "coredns-64897985d-9gcj2" [28e98fca-a88b-422d-9f4b-797b18a8ff7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551025  276679 system_pods.go:61] "etcd-default-k8s-different-port-20220601110654-6708" [3005e651-1349-4d5e-b06f-e0fac3064ccf] Running
	I0601 11:20:24.551035  276679 system_pods.go:61] "kindnet-7fspq" [eefcd8e6-51e4-4d48-a420-93f4b47cf732] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:20:24.551042  276679 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601110654-6708" [974fafdd-9176-4d97-acd7-9874d63b4987] Running
	I0601 11:20:24.551053  276679 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601110654-6708" [38b2c1a1-9a1a-4a1f-9fac-904e47d545be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:20:24.551066  276679 system_pods.go:61] "kube-proxy-slzcl" [a1a6237f-6142-4e31-8bd4-72afd4f8a4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:20:24.551083  276679 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601110654-6708" [42ce6176-36e5-46bc-a443-19e4ca958785] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:20:24.551092  276679 system_pods.go:61] "metrics-server-b955d9d8-2k9wk" [fbc457b5-c359-4b84-abe5-d488874181f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551102  276679 system_pods.go:61] "storage-provisioner" [48086474-3417-47ff-970d-f7cf7806983b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551112  276679 system_pods.go:74] duration metric: took 6.652373ms to wait for pod list to return data ...
	I0601 11:20:24.551126  276679 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:20:24.553819  276679 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:20:24.553843  276679 node_conditions.go:123] node cpu capacity is 8
	I0601 11:20:24.553854  276679 node_conditions.go:105] duration metric: took 2.721044ms to run NodePressure ...
	I0601 11:20:24.553869  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:24.680194  276679 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683686  276679 kubeadm.go:777] kubelet initialised
	I0601 11:20:24.683708  276679 kubeadm.go:778] duration metric: took 3.487172ms waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683715  276679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:24.689167  276679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
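	
	From this point the test waits up to 4m0s for each system-critical pod to report Ready; the repeated pod_ready lines that follow show coredns pinned in Pending because the lone node still carries the node.kubernetes.io/not-ready taint cited in the scheduler message. A sketch of the same diagnosis by hand, assuming kubectl is pointed at this cluster:
	
	    # Confirm the pod is Pending and inspect the node taint blocking it.
	    kubectl -n kube-system get pod -l k8s-app=kube-dns -o wide
	    kubectl get nodes -o jsonpath='{.items[*].spec.taints}'
	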
	I0601 11:20:26.694484  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:26.219205  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:28.219317  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:27.243486  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:29.742717  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:31.742800  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:28.695017  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.695110  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:32.695566  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.219646  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:32.719074  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:34.242643  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:36.243891  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.195305  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:37.197596  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.219473  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:37.719336  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:38.243963  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.743349  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:39.695270  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:42.195160  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.218932  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.719276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.743398  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.243686  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:44.694661  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:46.695274  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.219350  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.719698  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.742813  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.244047  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:48.696514  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:51.195247  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.218967  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.219422  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.743394  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.743515  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:53.694370  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:55.694640  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:57.695171  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.719514  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.219033  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.242819  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.243739  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.739945  270029 pod_ready.go:81] duration metric: took 4m0.002166585s waiting for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
	E0601 11:20:59.739968  270029 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:20:59.739995  270029 pod_ready.go:38] duration metric: took 4m0.008917217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:59.740018  270029 kubeadm.go:630] restartCluster took 4m15.707393707s
	W0601 11:20:59.740131  270029 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 11:20:59.740156  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:21:01.430762  270029 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.690579833s)
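	[Editor's note: once the 4m wait times out, minikube gives up on restarting the existing cluster and resets it. The reset is the kubeadm command shown verbatim above, executed inside the docker-driver node container via minikube's ssh_runner. As a rough standalone approximation (run locally with os/exec rather than over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command taken verbatim from the log; minikube runs it inside the
		// node container, not on the host.
		cmd := exec.Command("/bin/bash", "-c",
			`sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force`)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("kubeadm reset failed:", err)
		}
	}
	]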
	I0601 11:21:01.430838  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:01.440364  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:21:01.447145  270029 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:21:01.447194  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:21:01.453852  270029 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
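	[Editor's note: the failed ls is expected immediately after the reset, which removes the kubeconfigs kubeadm generates, so minikube skips stale-config cleanup and proceeds straight to kubeadm init. The check itself amounts to an existence test on four files; a minimal sketch:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The four files minikube's config check looks for (see the ls above).
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		missing := 0
		for _, f := range files {
			if _, err := os.Stat(f); err != nil {
				fmt.Printf("%s: %v\n", f, err)
				missing++
			}
		}
		if missing > 0 {
			fmt.Println("config check failed, skipping stale config cleanup")
		}
	}
	]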
	I0601 11:21:01.453891  270029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:21:01.701224  270029 out.go:204]   - Generating certificates and keys ...
	I0601 11:21:00.194872  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:02.195437  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.219067  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:01.219719  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:03.719181  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:02.294583  270029 out.go:204]   - Booting up control plane ...
	I0601 11:21:04.694423  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:06.695087  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
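	[Editor's note: the scheduler message repeated in every pod_ready line above is the actual blocker: the node still carries the node.kubernetes.io/not-ready taint and the coredns pod has no matching toleration, so it never leaves Pending (PodScheduled=False). For reference only, a toleration that would match that taint, using the Kubernetes Go types; an empty Effect matches all taint effects, so this sidesteps whether the taint was applied as NoSchedule or NoExecute. This is a sketch, not a suggestion that CoreDNS should carry it:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func main() {
		// Tolerate node.kubernetes.io/not-ready regardless of value or effect.
		tol := corev1.Toleration{
			Key:      "node.kubernetes.io/not-ready",
			Operator: corev1.TolerationOpExists,
		}
		fmt.Printf("%+v\n", tol)
	}
	]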
	I0601 11:21:05.719516  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:07.719966  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:09.195174  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:11.694583  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:10.218984  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:12.219075  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:14.337355  270029 out.go:204]   - Configuring RBAC rules ...
	I0601 11:21:14.750718  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:21:14.750741  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:21:14.752905  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:21:14.754285  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:21:14.758047  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:21:14.758065  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:21:14.771201  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:21:15.434277  270029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:21:15.434380  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.434381  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601110327-6708 minikube.k8s.io/updated_at=2022_06_01T11_21_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489119  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489208  270029 ops.go:34] apiserver oom_adj: -16
	I0601 11:21:16.079192  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:16.579319  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:14.194681  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:16.694557  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:14.219440  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:16.719363  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:17.079349  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:17.579548  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.079683  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.579186  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.079819  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.579346  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.079183  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.579984  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.079335  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.579766  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.694796  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:21.194627  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:19.218867  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:21.219185  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:23.719814  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:22.079321  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:22.579993  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.079856  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.579743  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.079256  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.579276  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.079828  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.579763  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.080068  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.579388  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.694527  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:25.694996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:27.079269  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.579729  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.636171  270029 kubeadm.go:1045] duration metric: took 12.201851278s to wait for elevateKubeSystemPrivileges.
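	[Editor's note: the burst of kubectl get sa default calls above is the elevateKubeSystemPrivileges step polling, at a 500ms cadence (visible in the alternating .079/.579 timestamps), until the default service account exists and the command exits 0. Roughly, in client-go terms; the namespace is assumed to be kubectl's default, since no -n flag appears in the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, matching the cadence in the log above.
		for {
			_, err := client.CoreV1().ServiceAccounts("default").Get(
				context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	]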
	I0601 11:21:27.636205  270029 kubeadm.go:397] StartCluster complete in 4m43.646757592s
	I0601 11:21:27.636227  270029 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:27.636334  270029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:21:27.637880  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:28.157076  270029 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601110327-6708" rescaled to 1
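	[Editor's note: the rescale above caps the coredns deployment at one replica to fit the single-node cluster. Via client-go's scale subresource that is roughly the following; minikube's kapi.go may implement it differently, so treat this as a sketch:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.TODO()
		// Read the scale subresource, set replicas to 1, write it back.
		scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	]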
	I0601 11:21:28.157150  270029 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:21:28.157180  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:21:28.159818  270029 out.go:177] * Verifying Kubernetes components...
	I0601 11:21:28.157185  270029 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:21:28.157406  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:21:28.161484  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:28.161496  270029 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161511  270029 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161523  270029 addons.go:165] addon metrics-server should already be in state true
	I0601 11:21:28.161537  270029 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161566  270029 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601110327-6708"
	I0601 11:21:28.161573  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	W0601 11:21:28.161579  270029 addons.go:165] addon dashboard should already be in state true
	I0601 11:21:28.161483  270029 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161622  270029 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161631  270029 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:21:28.161636  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161669  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161500  270029 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161709  270029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601110327-6708"
	I0601 11:21:28.161949  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162094  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162123  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162229  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.209663  270029 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.211523  270029 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:21:28.213009  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:21:28.213030  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:21:28.213079  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.216922  270029 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.218989  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:21:28.217201  270029 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.219035  270029 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:21:28.219075  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.219579  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.219012  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:21:28.219781  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.236451  270029 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:21:26.218905  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.219209  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.238138  270029 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.238163  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:21:28.238217  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.246850  270029 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:21:28.246885  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:21:28.273680  270029 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.273707  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:21:28.273761  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.278846  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.279320  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.286384  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.321729  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
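	[Editor's note: each ssh client above dials 127.0.0.1:49437, the port discovered by the docker container inspect -f calls: the Go template indexes .NetworkSettings.Ports at "22/tcp" and takes the first binding's HostPort, i.e. the host-side port Docker mapped to the node container's SSH port. Standalone, with the container name from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template as in the log: first host port bound to container port 22.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"embed-certs-20220601110327-6708").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh port:", strings.TrimSpace(string(out)))
	}
	]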
	I0601 11:21:28.455756  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:21:28.455785  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:21:28.466348  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.469026  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:21:28.469067  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:21:28.469486  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.478076  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:21:28.478099  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:21:28.487008  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:21:28.487036  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:21:28.573106  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:21:28.573135  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:21:28.574698  270029 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
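	[Editor's note: the host record injection reported here is the sed pipeline a few lines up: it reads the coredns ConfigMap, splices a hosts block in front of the Corefile's "forward . /etc/resolv.conf" directive, and replaces the ConfigMap. Reconstructed from the sed expression, the inserted stanza is:

	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }

	This lets pods resolve host.minikube.internal to the host's address on the cluster network (192.168.76.1 for this profile).]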
	I0601 11:21:28.577019  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.577042  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:21:28.653936  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:21:28.653967  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:21:28.658482  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.671762  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:21:28.671808  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:21:28.758424  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:21:28.758516  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:21:28.776703  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:21:28.776735  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:21:28.794636  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:21:28.794670  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:21:28.959418  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:28.959449  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:21:28.976465  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:29.354605  270029 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601110327-6708"
	I0601 11:21:29.699561  270029 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:21:29.700807  270029 addons.go:417] enableAddons completed in 1.543631535s
	I0601 11:21:30.260215  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:28.196140  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.694688  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:32.695236  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.219534  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.219685  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.260412  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:34.760173  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:36.760442  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:35.195034  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:37.195304  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:34.718805  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:36.719108  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:38.760533  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:40.761060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:39.694703  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:42.195994  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:39.219402  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:41.718982  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.719227  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.259684  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.260363  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.719329  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.719480  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.721505  254820 node_ready.go:38] duration metric: took 4m0.008123732s waiting for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:21:47.723918  254820 out.go:177] 
	W0601 11:21:47.725406  254820 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:21:47.725423  254820 out.go:239] * 
	W0601 11:21:47.726098  254820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:21:47.728001  254820 out.go:177] 
	I0601 11:21:44.695306  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.194624  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.760960  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:50.260784  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:49.195368  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:51.694946  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:52.760281  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:55.259912  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:54.194912  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:56.195652  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:57.259956  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:59.759755  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:01.759853  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:58.694995  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:01.194431  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:03.760721  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:06.260069  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:03.195297  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:05.694312  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:07.695082  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:08.260739  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:10.760237  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:10.194760  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:12.194885  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:13.259813  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:15.260153  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:14.195226  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:16.694528  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:17.260859  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:19.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:21.760654  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:18.695235  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:21.194694  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:24.260433  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:26.760129  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:23.197530  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:25.695229  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:28.760717  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:31.260368  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:28.194771  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:30.195026  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:32.694696  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:33.760112  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:35.760758  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:34.694930  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:36.695375  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:38.260723  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:40.760393  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:39.194795  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:41.694750  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:43.259823  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:45.260551  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:44.195389  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:46.695489  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:47.760311  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:49.760404  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:49.194395  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:51.195245  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:52.260594  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:54.760044  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:56.760073  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:53.195327  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:55.694893  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:58.760157  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:01.260267  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:58.194547  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:00.694762  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:03.260561  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:05.260780  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:03.195176  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:05.694698  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.695208  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.760513  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.260326  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.195039  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.695240  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.260674  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:14.260918  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:16.760064  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:15.195155  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:17.195241  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:18.760686  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:21.260676  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:19.694620  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:21.694667  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:23.760024  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:26.259746  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:24.194510  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:26.194546  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:28.260714  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:30.760541  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:28.194917  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:30.694766  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:33.260035  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:35.261060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:33.195328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:35.694682  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.695340  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.760144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.260334  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.194751  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.194853  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.759808  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:46.760285  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.695010  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:46.695526  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:48.760374  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:51.260999  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:49.194307  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:51.195053  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:53.760587  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:56.260172  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:53.195339  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:55.695153  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:58.759799  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:00.760631  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:58.194738  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:00.195407  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:02.695048  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:03.260687  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:05.260722  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:04.695337  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.194665  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.760567  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:10.260596  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:09.195069  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:11.694328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:12.260967  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.759793  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:16.760292  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.194996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:16.694542  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:18.760531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:20.760689  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:18.694668  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:20.695051  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:23.195952  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:24.691928  276679 pod_ready.go:81] duration metric: took 4m0.002724634s waiting for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	E0601 11:24:24.691955  276679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:24:24.691981  276679 pod_ready.go:38] duration metric: took 4m0.008258762s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:24:24.692005  276679 kubeadm.go:630] restartCluster took 4m14.973349857s
	W0601 11:24:24.692130  276679 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
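The 4m0s expiry above is the generic poll-until-Ready wait in pod_ready.go (node_ready.go runs the same shape of loop against node conditions, which is the other stream of lines in this log). A minimal sketch of that pattern, assuming a client-go clientset; the package, helper name, and 2s interval are illustrative, not minikube's actual code:

	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the pod reports the Ready condition as True,
	// or the timeout elapses -- the wait that expired at 4m0s above.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not reported yet
		})
	}

In the failing case above the pod never leaves Pending (it is Unschedulable while the node keeps the not-ready taint), so the loop runs to its deadline and the caller falls back to a full reset, as the next lines show.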
	I0601 11:24:24.692159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:24:26.286416  276679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.594228976s)
	I0601 11:24:26.286489  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:26.296314  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:24:26.303059  276679 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:24:26.303116  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:24:26.309917  276679 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:24:26.309957  276679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:24:22.761011  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:25.261206  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:26.556270  276679 out.go:204]   - Generating certificates and keys ...
	I0601 11:24:27.302083  276679 out.go:204]   - Booting up control plane ...
	I0601 11:24:27.261441  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:29.759885  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:32.260145  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:34.260990  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:36.760710  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:38.840585  276679 out.go:204]   - Configuring RBAC rules ...
	I0601 11:24:39.253770  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:24:39.253791  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:24:39.255739  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:24:39.259837  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:41.260124  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:39.257207  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:24:39.261207  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:24:39.261228  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:24:39.273744  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
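The CNI step just logged reduces to two actions: cni.go recommends kindnet for the docker driver paired with the containerd runtime, then the generated manifest is applied with the pinned kubectl. A rough sketch under those assumptions; ssh_runner executes the command on the node, for which exec.Command is a local stand-in, and the non-kindnet branch is a simplification of a larger decision table not shown in this log:

	package cnisketch

	import (
		"fmt"
		"os/exec"
	)

	// recommendCNI mirrors the decision logged above: docker driver with a
	// non-docker runtime (containerd here) gets kindnet.
	func recommendCNI(driver, runtime string) string {
		if driver == "docker" && runtime != "docker" {
			return "kindnet"
		}
		return "" // other combinations follow logic not visible in this log
	}

	// applyManifest applies the scp'd cni.yaml with the kubeconfig that
	// lives on the node, matching the command in the line above.
	func applyManifest(path string) error {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply CNI manifest: %v: %s", err, out)
		}
		return nil
	}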
	I0601 11:24:39.861493  276679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:24:39.861573  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.861574  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_24_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.914842  276679 ops.go:34] apiserver oom_adj: -16
	I0601 11:24:39.914913  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.498901  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.998931  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.499031  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.998593  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:42.499160  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.260473  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:45.760870  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:42.998966  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.498638  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.998319  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.498531  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.998678  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.499193  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.998418  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.498985  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.998941  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:47.498945  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.260450  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:50.260933  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:47.999272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.498439  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.999292  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.499272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.998339  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.498332  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.999106  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.499296  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.998980  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.498623  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.998371  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.498515  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.594790  276679 kubeadm.go:1045] duration metric: took 13.733266896s to wait for elevateKubeSystemPrivileges.
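The run of `kubectl get sa default` calls above, one roughly every 500ms for 13.7s, is elevateKubeSystemPrivileges waiting for the default ServiceAccount to exist so that the minikube-rbac binding created at 11:24:39 can take effect. That binding is equivalent to the following hand-written client-go sketch (the names come from the logged command; the helper itself is illustrative, not minikube's code):

	package rbacsketch

	import (
		"context"

		rbacv1 "k8s.io/api/rbac/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// elevateKubeSystem is the API equivalent of the logged command:
	//   kubectl create clusterrolebinding minikube-rbac \
	//     --clusterrole=cluster-admin --serviceaccount=kube-system:default
	func elevateKubeSystem(ctx context.Context, cs kubernetes.Interface) error {
		crb := &rbacv1.ClusterRoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
			RoleRef: rbacv1.RoleRef{
				APIGroup: "rbac.authorization.k8s.io",
				Kind:     "ClusterRole",
				Name:     "cluster-admin",
			},
			Subjects: []rbacv1.Subject{{
				Kind:      "ServiceAccount",
				Name:      "default",
				Namespace: "kube-system",
			}},
		}
		_, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
		return err
	}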
	I0601 11:24:53.594820  276679 kubeadm.go:397] StartCluster complete in 4m43.917251881s
	I0601 11:24:53.594841  276679 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:53.594938  276679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:24:53.596907  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:54.111475  276679 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:24:54.111547  276679 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:24:54.113711  276679 out.go:177] * Verifying Kubernetes components...
	I0601 11:24:54.111604  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:24:54.111644  276679 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:24:54.111802  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:24:54.115020  276679 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115035  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:54.115035  276679 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115048  276679 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115055  276679 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115057  276679 addons.go:165] addon storage-provisioner should already be in state true
	W0601 11:24:54.115064  276679 addons.go:165] addon metrics-server should already be in state true
	I0601 11:24:54.115034  276679 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115103  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115109  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115112  276679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115037  276679 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115134  276679 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115144  276679 addons.go:165] addon dashboard should already be in state true
	I0601 11:24:54.115176  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115416  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115596  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115611  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115615  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.129176  276679 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:24:54.168194  276679 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:24:54.169714  276679 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.171144  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:24:54.170891  276679 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.171181  276679 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:24:54.171211  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.171167  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:24:54.171329  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.171684  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.176157  276679 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.177770  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:24:54.177796  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:24:54.179131  276679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:24:54.177859  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.180787  276679 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.180809  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:24:54.180855  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.233206  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.240234  276679 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.240263  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:24:54.240311  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.240743  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.242497  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.255476  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:24:54.289597  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.510589  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.510747  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:24:54.510770  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:24:54.556919  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:24:54.556950  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:24:54.566012  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:24:54.566042  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:24:54.569528  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.576575  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:24:54.576625  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:24:54.654525  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:24:54.654551  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:24:54.655296  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.655319  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:24:54.661290  276679 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
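For readability: the sed pipeline at 11:24:54.255, whose success the "host record injected" line above reports, rewrites the coredns ConfigMap so the Corefile gains this hosts stanza immediately ahead of the `forward . /etc/resolv.conf` line (the stanza below is exactly what that sed script inserts):

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

This lets pods resolve host.minikube.internal to the docker network's gateway while all other names fall through to the normal forwarder.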
	I0601 11:24:54.671592  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:24:54.671621  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:24:54.673696  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.687107  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:24:54.687133  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:24:54.768961  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:24:54.768989  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:24:54.854363  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:24:54.854399  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:24:54.870735  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:24:54.870762  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:24:54.888031  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:54.888063  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:24:54.967082  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:55.273650  276679 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:55.661065  276679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 11:24:52.261071  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:54.261578  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:56.760078  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:55.662561  276679 addons.go:417] enableAddons completed in 1.550935677s
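
	The addon installation above follows a fixed two-step pattern: each manifest is streamed over SSH into /etc/kubernetes/addons/ on the node (the `scp memory --> ...` lines), and the whole group is then applied in a single kubectl invocation (the ssh_runner line at 11:24:54.967082). A minimal sketch of that second step, run on the node itself; this is illustrative rather than minikube's actual code, and the manifest list is abbreviated from the ten dashboard files shown above.

	    package main

	    import (
	        "log"
	        "os/exec"
	    )

	    func main() {
	        args := []string{
	            "KUBECONFIG=/var/lib/minikube/kubeconfig", // sudo accepts VAR=value prefixes
	            "/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
	            "-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
	            "-f", "/etc/kubernetes/addons/dashboard-svc.yaml", // ...remaining manifests elided
	        }
	        out, err := exec.Command("sudo", args...).CombinedOutput()
	        if err != nil {
	            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	        }
	        log.Printf("applied:\n%s", out)
	    }
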
	I0601 11:24:56.136034  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:24:58.760245  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:00.760344  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:58.136131  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:00.136759  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:02.636409  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:03.260144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.260531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.136779  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.635969  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.760027  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:09.760904  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:10.136336  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.636564  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.260100  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.759992  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:16.760260  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.636694  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:17.137058  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:19.260136  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:21.260700  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:19.636331  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:22.136010  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:23.760875  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:26.261082  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:24.136501  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:26.636646  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:28.263320  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:28.263343  270029 node_ready.go:38] duration metric: took 4m0.016466534s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:25:28.265930  270029 out.go:177] 
	W0601 11:25:28.267524  270029 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:25:28.267549  270029 out.go:239] * 
	W0601 11:25:28.268404  270029 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:25:28.269962  270029 out.go:177] 
	I0601 11:25:28.637161  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:31.135894  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:33.136655  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:35.635923  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:37.636131  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:39.636319  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:42.136004  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:44.136847  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:46.636704  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:49.136203  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:51.136808  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:53.636402  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:56.135580  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:58.135934  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:00.136698  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:02.136807  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:04.636360  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:07.136003  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:09.136403  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:11.636023  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:13.636284  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:16.136059  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:18.635976  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:20.636471  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:23.136420  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:25.635898  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:27.636092  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:29.636223  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:32.135814  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:34.136208  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:36.136320  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:38.635965  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:41.136884  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:43.636083  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:46.136237  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:48.635722  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:51.135780  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:53.136057  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:55.136925  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:57.636578  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:00.135989  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:02.136086  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:04.136153  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:06.635746  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:08.636054  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:10.636582  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:13.136118  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:15.137042  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:17.636192  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:20.136181  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:22.136256  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:24.136756  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:26.636114  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:28.636414  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:31.136248  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:33.136847  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:35.635813  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:37.636126  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:39.636375  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:42.136175  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:44.636682  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:47.135843  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:49.136252  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:51.137073  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:53.636035  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:55.636279  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:58.136943  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:00.635664  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:02.636502  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:04.638145  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:07.136842  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:09.636372  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:12.136048  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:14.136569  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:16.635705  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:18.636532  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:21.136177  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:23.636753  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:26.136524  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:28.635691  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:30.636561  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:33.136478  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:35.636196  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:38.137078  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:40.636164  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:42.636749  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:45.136427  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:47.636180  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:49.636861  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:52.136563  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.136714  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.138823  276679 node_ready.go:38] duration metric: took 4m0.0096115s waiting for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:28:54.141397  276679 out.go:177] 
	W0601 11:28:54.143025  276679 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:28:54.143041  276679 out.go:239] * 
	W0601 11:28:54.143750  276679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:28:54.145729  276679 out.go:177] 
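
	Both timeouts above are the same mechanism giving up: node_ready.go polls the node object every ~2.5s, the Ready condition never leaves False, and after 4m0s of polling (the tail of the 6m0s wait budget) start exits with GUEST_START. A minimal client-go sketch of that kind of wait follows; it approximates, rather than reproduces, minikube's node_ready.go (waitNodeReady is a hypothetical helper; the kubeconfig path is the one the kubectl invocations above use).

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitNodeReady polls until the node's Ready condition reports True or
	    // the timeout elapses -- the same shape as the node_ready.go loop above.
	    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
	            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	            if err != nil {
	                return false, nil // treat transient API errors as retryable
	            }
	            for _, c := range node.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    return c.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)
	        // Both runs above polled for exactly 4m0s before giving up.
	        err = waitNodeReady(cs, "default-k8s-different-port-20220601110654-6708", 4*time.Minute)
	        fmt.Println("wait result:", err)
	    }
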
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	8135d6d8eeb51       6de166512aa22       About a minute ago   Running             kindnet-cni               4                   4390368520877
	aeebcabcf96a8       6de166512aa22       4 minutes ago        Exited              kindnet-cni               3                   4390368520877
	bba155e14e8f3       c21b0c7400f98       13 minutes ago       Running             kube-proxy                0                   2757059ae300e
	0347453bb77d9       06a629a7e51cd       13 minutes ago       Running             kube-controller-manager   0                   f76ee23e41e32
	c6dd696a23428       b305571ca60a5       13 minutes ago       Running             kube-apiserver            0                   f2e3ad18f3af9
	a946b8ec63ccd       301ddc62b80b1       13 minutes ago       Running             kube-scheduler            0                   b9bd728b9dde4
	c7d9c76499959       b2756210eeabf       13 minutes ago       Running             etcd                      0                   acf2412deefa0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:11:54 UTC, end at Wed 2022-06-01 11:30:51 UTC. --
	Jun 01 11:23:09 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:23:09.674222115Z" level=info msg="RemoveContainer for \"310e21ce9d14163f7fa71a73d3372ad19670ad2c2044e502fc7e639d02e04aa5\" returns successfully"
	Jun 01 11:23:20 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:23:20.026127906Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jun 01 11:23:20 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:23:20.038668862Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"25ba8a177ef246206d35b30ab5e2073fd95af0110ccc82b6f4b96a55108809c6\""
	Jun 01 11:23:20 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:23:20.039169423Z" level=info msg="StartContainer for \"25ba8a177ef246206d35b30ab5e2073fd95af0110ccc82b6f4b96a55108809c6\""
	Jun 01 11:23:20 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:23:20.168880505Z" level=info msg="StartContainer for \"25ba8a177ef246206d35b30ab5e2073fd95af0110ccc82b6f4b96a55108809c6\" returns successfully"
	Jun 01 11:26:00 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:00.391391126Z" level=info msg="shim disconnected" id=25ba8a177ef246206d35b30ab5e2073fd95af0110ccc82b6f4b96a55108809c6
	Jun 01 11:26:00 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:00.391453434Z" level=warning msg="cleaning up after shim disconnected" id=25ba8a177ef246206d35b30ab5e2073fd95af0110ccc82b6f4b96a55108809c6 namespace=k8s.io
	Jun 01 11:26:00 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:00.391466979Z" level=info msg="cleaning up dead shim"
	Jun 01 11:26:00 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:00.400646046Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:26:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5765 runtime=io.containerd.runc.v2\n"
	Jun 01 11:26:00 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:00.914153981Z" level=info msg="RemoveContainer for \"474a26b35c18b4257bbdf87dafc02876c3cbe21ebd72bf6427072e27c0acb83b\""
	Jun 01 11:26:00 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:00.919107534Z" level=info msg="RemoveContainer for \"474a26b35c18b4257bbdf87dafc02876c3cbe21ebd72bf6427072e27c0acb83b\" returns successfully"
	Jun 01 11:26:28 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:28.025889460Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jun 01 11:26:28 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:28.037622789Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"aeebcabcf96a801852b126a414cee59934be74087980720f9cbedfb3c41eb3f8\""
	Jun 01 11:26:28 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:28.038096264Z" level=info msg="StartContainer for \"aeebcabcf96a801852b126a414cee59934be74087980720f9cbedfb3c41eb3f8\""
	Jun 01 11:26:28 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:26:28.157391883Z" level=info msg="StartContainer for \"aeebcabcf96a801852b126a414cee59934be74087980720f9cbedfb3c41eb3f8\" returns successfully"
	Jun 01 11:29:08 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:08.393918895Z" level=info msg="shim disconnected" id=aeebcabcf96a801852b126a414cee59934be74087980720f9cbedfb3c41eb3f8
	Jun 01 11:29:08 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:08.393971872Z" level=warning msg="cleaning up after shim disconnected" id=aeebcabcf96a801852b126a414cee59934be74087980720f9cbedfb3c41eb3f8 namespace=k8s.io
	Jun 01 11:29:08 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:08.393984035Z" level=info msg="cleaning up dead shim"
	Jun 01 11:29:08 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:08.403320788Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:29:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6222 runtime=io.containerd.runc.v2\n"
	Jun 01 11:29:09 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:09.171204526Z" level=info msg="RemoveContainer for \"25ba8a177ef246206d35b30ab5e2073fd95af0110ccc82b6f4b96a55108809c6\""
	Jun 01 11:29:09 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:09.176023174Z" level=info msg="RemoveContainer for \"25ba8a177ef246206d35b30ab5e2073fd95af0110ccc82b6f4b96a55108809c6\" returns successfully"
	Jun 01 11:29:50 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:50.025845651Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Jun 01 11:29:50 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:50.038842704Z" level=info msg="CreateContainer within sandbox \"43903685208773e46ae9179be445fb4b8907c2aeefa84be65aa89e4065b739f4\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"8135d6d8eeb514619f724ccea2076316b39f2b6ee3c0114d17e0c0624e474833\""
	Jun 01 11:29:50 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:50.039334440Z" level=info msg="StartContainer for \"8135d6d8eeb514619f724ccea2076316b39f2b6ee3c0114d17e0c0624e474833\""
	Jun 01 11:29:50 old-k8s-version-20220601105850-6708 containerd[390]: time="2022-06-01T11:29:50.257984457Z" level=info msg="StartContainer for \"8135d6d8eeb514619f724ccea2076316b39f2b6ee3c0114d17e0c0624e474833\" returns successfully"
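
	Read together with the container-status table further up, this containerd section shows a regular ~2m40s crash cycle for kindnet-cni: attempt 2 runs 11:23:20-11:26:00, attempt 3 runs 11:26:28-11:29:08, and attempt 4 (started 11:29:50) is the one still Running at the end of the log window. To see why each attempt dies, the exited attempt's own output is the next thing to pull; a sketch using standard crictl commands wrapped in Go (the truncated container ID is the Exited attempt 3 from the status table):

	    package main

	    import (
	        "log"
	        "os/exec"
	    )

	    func main() {
	        for _, args := range [][]string{
	            {"crictl", "ps", "-a", "--name", "kindnet-cni"}, // every attempt, Running and Exited
	            {"crictl", "logs", "aeebcabcf96a8"},             // output of the Exited attempt 3
	        } {
	            out, err := exec.Command("sudo", args...).CombinedOutput()
	            if err != nil {
	                log.Printf("%v failed: %v", args, err)
	            }
	            log.Printf("%v:\n%s", args, out)
	        }
	    }
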
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220601105850-6708
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220601105850-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=old-k8s-version-20220601105850-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_17_32_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:17:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:30:27 +0000   Wed, 01 Jun 2022 11:17:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:30:27 +0000   Wed, 01 Jun 2022 11:17:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:30:27 +0000   Wed, 01 Jun 2022 11:17:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:30:27 +0000   Wed, 01 Jun 2022 11:17:24 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    old-k8s-version-20220601105850-6708
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873824Ki
	 pods:               110
	System Info:
	 Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	 System UUID:                cf752223-716a-46c7-b06a-74cba9af00dc
	 Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	 Kernel Version:             5.13.0-1027-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.4
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220601105850-6708                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kindnet-wnn66                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                kube-apiserver-old-k8s-version-20220601105850-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-20220601105850-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-gh8fk                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-20220601105850-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                             Message
	  ----    ------                   ----               ----                                             -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-20220601105850-6708     Node old-k8s-version-20220601105850-6708 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-20220601105850-6708  Starting kube-proxy.
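
	The one line in this describe output that explains the failure is the Ready condition: KubeletNotReady, "cni plugin not initialized". Because kindnet-cni never stays up long enough to install a CNI config, the node keeps its node.kubernetes.io/not-ready:NoSchedule taint and the wait for Ready can never succeed. A sketch for extracting just that condition (standard kubectl jsonpath; the --context matches the helpers_test invocations at the bottom of this report):

	    package main

	    import (
	        "fmt"
	        "log"
	        "os/exec"
	    )

	    func main() {
	        jp := `{.status.conditions[?(@.type=="Ready")].reason}{" "}{.status.conditions[?(@.type=="Ready")].message}`
	        out, err := exec.Command("kubectl",
	            "--context", "old-k8s-version-20220601105850-6708",
	            "get", "node", "old-k8s-version-20220601105850-6708",
	            "-o", "jsonpath="+jp).CombinedOutput()
	        if err != nil {
	            log.Fatalf("kubectl failed: %v\n%s", err, out)
	        }
	        // Here this prints: KubeletNotReady runtime network not ready: ...
	        fmt.Println(string(out))
	    }
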
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [c7d9c7649995996591a343170bd6f7b866e1d7a5c3c4c910856af8592831e768] <==
	* 2022-06-01 11:17:23.765693 I | raft: b2c6679ac05f2cf1 became follower at term 0
	2022-06-01 11:17:23.765701 I | raft: newRaft b2c6679ac05f2cf1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2022-06-01 11:17:23.765706 I | raft: b2c6679ac05f2cf1 became follower at term 1
	2022-06-01 11:17:23.770216 W | auth: simple token is not cryptographically signed
	2022-06-01 11:17:23.773114 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-06-01 11:17:23.775073 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-06-01 11:17:23.775389 I | embed: listening for metrics on http://192.168.58.2:2381
	2022-06-01 11:17:23.775515 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-06-01 11:17:23.775709 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-06-01 11:17:23.775847 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2022-06-01 11:17:24.666034 I | raft: b2c6679ac05f2cf1 is starting a new election at term 1
	2022-06-01 11:17:24.666080 I | raft: b2c6679ac05f2cf1 became candidate at term 2
	2022-06-01 11:17:24.666097 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	2022-06-01 11:17:24.666109 I | raft: b2c6679ac05f2cf1 became leader at term 2
	2022-06-01 11:17:24.666115 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2022-06-01 11:17:24.666444 I | etcdserver: published {Name:old-k8s-version-20220601105850-6708 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2022-06-01 11:17:24.666483 I | embed: ready to serve client requests
	2022-06-01 11:17:24.666512 I | etcdserver: setting up the initial cluster version to 3.3
	2022-06-01 11:17:24.666542 I | embed: ready to serve client requests
	2022-06-01 11:17:24.667781 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-06-01 11:17:24.667915 I | etcdserver/api: enabled capabilities for version 3.3
	2022-06-01 11:17:24.669123 I | embed: serving client requests on 127.0.0.1:2379
	2022-06-01 11:17:24.669320 I | embed: serving client requests on 192.168.58.2:2379
	2022-06-01 11:27:24.788332 I | mvcc: store.index: compact 559
	2022-06-01 11:27:24.789226 I | mvcc: finished scheduled compaction at 559 (took 552.653µs)
	
	* 
	* ==> kernel <==
	*  11:30:51 up  1:13,  0 users,  load average: 0.30, 0.75, 1.41
	Linux old-k8s-version-20220601105850-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [c6dd696a23428853e9dd6984647f57b50a36f6b1945411c85942976aea45fbac] <==
	* I0601 11:23:28.407407       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0601 11:23:28.407485       1 handler_proxy.go:99] no RequestInfo found in the context
	E0601 11:23:28.407525       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:23:28.407538       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:25:28.407792       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0601 11:25:28.407917       1 handler_proxy.go:99] no RequestInfo found in the context
	E0601 11:25:28.408012       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:25:28.408031       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:27:28.408798       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0601 11:27:28.408892       1 handler_proxy.go:99] no RequestInfo found in the context
	E0601 11:27:28.408961       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:27:28.408979       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:28:28.409200       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0601 11:28:28.409288       1 handler_proxy.go:99] no RequestInfo found in the context
	E0601 11:28:28.409365       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:28:28.409380       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:30:28.409577       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0601 11:30:28.409660       1 handler_proxy.go:99] no RequestInfo found in the context
	E0601 11:30:28.409722       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:30:28.409735       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
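
	Every 503 in this section is the same aggregated API, v1beta1.metrics.k8s.io, whose backing metrics-server pod never starts (it appears in the non-running pod list at the bottom of this report, another casualty of the missing CNI). The OpenAPI aggregation controller therefore keeps rate-limit-requeueing it, and the kube-controller-manager section below shows the identical group breaking resource-quota and garbage-collector discovery. A sketch for confirming the broken link via the APIService's Available condition (the resource and jsonpath are standard; the exact message text is an assumption):

	    package main

	    import (
	        "fmt"
	        "log"
	        "os/exec"
	    )

	    func main() {
	        out, err := exec.Command("kubectl",
	            "--context", "old-k8s-version-20220601105850-6708",
	            "get", "apiservice", "v1beta1.metrics.k8s.io",
	            "-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}{" "}{.status.conditions[?(@.type=="Available")].message}`,
	        ).CombinedOutput()
	        if err != nil {
	            log.Fatalf("kubectl failed: %v\n%s", err, out)
	        }
	        fmt.Println(string(out)) // assumed output: False ... no ready endpoints for metrics-server
	    }
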
	
	* 
	* ==> kube-controller-manager [0347453bb77d9cbbda5d7387d32f01c8f751abedb22f454acabca801b977d1de] <==
	* E0601 11:24:20.638416       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:24:43.385059       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:24:50.889791       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:25:15.386565       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:25:21.141055       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:25:47.388004       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:25:51.392549       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:26:19.389313       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:26:21.644115       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:26:51.390803       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:26:51.895598       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0601 11:27:22.146816       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:27:23.392266       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:27:52.398252       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:27:55.393686       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:28:22.649724       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:28:27.395123       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:28:52.901176       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:28:59.396611       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:29:23.152761       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:29:31.397973       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:29:53.405138       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:30:03.399437       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:30:23.656728       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:30:35.400784       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [bba155e14e8f3aac6b7847d5dd32a5f7b82602b1afa57eb4054e328a8e89213d] <==
	* W0601 11:17:47.485358       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0601 11:17:47.491254       1 node.go:135] Successfully retrieved node IP: 192.168.58.2
	I0601 11:17:47.491284       1 server_others.go:149] Using iptables Proxier.
	I0601 11:17:47.491627       1 server.go:529] Version: v1.16.0
	I0601 11:17:47.492168       1 config.go:131] Starting endpoints config controller
	I0601 11:17:47.492204       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0601 11:17:47.492231       1 config.go:313] Starting service config controller
	I0601 11:17:47.492247       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0601 11:17:47.592479       1 shared_informer.go:204] Caches are synced for service config 
	I0601 11:17:47.592481       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [a946b8ec63ccdd39b9f960ce249eaec023b354513cddc382bd365e4c96999dbd] <==
	* I0601 11:17:27.465268       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0601 11:17:27.466328       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0601 11:17:27.482456       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:17:27.482502       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:17:27.482606       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:17:27.483167       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:17:27.483240       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:17:27.554063       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:17:27.555765       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:17:27.555838       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:17:27.560122       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:17:27.560124       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:17:27.560197       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:17:28.554528       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:17:28.556334       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:17:28.558148       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:17:28.559674       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:17:28.560773       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:17:28.561950       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:17:28.563536       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:17:28.564733       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:17:28.565826       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:17:28.566988       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:17:28.568112       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:17:49.163701       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:11:54 UTC, end at Wed 2022-06-01 11:30:51 UTC. --
	Jun 01 11:29:03 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:03.258590    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:29:08 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:08.259391    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:29:09 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:09.171185    2976 pod_workers.go:191] Error syncing pod 655a68dd-59d6-46fa-9b98-018e0adc10d0 ("kindnet-wnn66_kube-system(655a68dd-59d6-46fa-9b98-018e0adc10d0)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-wnn66_kube-system(655a68dd-59d6-46fa-9b98-018e0adc10d0)"
	Jun 01 11:29:13 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:13.260240    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:29:18 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:18.261029    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:29:23 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:23.024232    2976 pod_workers.go:191] Error syncing pod 655a68dd-59d6-46fa-9b98-018e0adc10d0 ("kindnet-wnn66_kube-system(655a68dd-59d6-46fa-9b98-018e0adc10d0)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-wnn66_kube-system(655a68dd-59d6-46fa-9b98-018e0adc10d0)"
	Jun 01 11:29:23 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:23.261825    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:29:28 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:28.262638    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:29:33 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:33.263387    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jun 01 11:29:38 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:38.023672    2976 pod_workers.go:191] Error syncing pod 655a68dd-59d6-46fa-9b98-018e0adc10d0 ("kindnet-wnn66_kube-system(655a68dd-59d6-46fa-9b98-018e0adc10d0)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-wnn66_kube-system(655a68dd-59d6-46fa-9b98-018e0adc10d0)"
	Jun 01 11:29:38 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:29:38.264132    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	[... same "Container runtime network not ready: ... cni plugin not initialized" message repeated every 5 seconds, 11:29:43 through 11:30:43 ...]
	Jun 01 11:30:48 old-k8s-version-20220601105850-6708 kubelet[2976]: E0601 11:30:48.274595    2976 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

-- /stdout --
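The kubelet excerpt above is the whole failure chain on this node: kindnet-cni is stuck in CrashLoopBackOff, so no CNI config is ever installed, the kubelet keeps reporting NetworkReady=false, and anything that needs pod networking stays Pending. A reasonable first triage pass, sketched below, assumes the kindnet DaemonSet carries its usual app=kindnet label and assumes the CNI conf paths to check (neither is confirmed by this excerpt):

	# Why is the CNI pod crash-looping? (label app=kindnet assumed)
	kubectl --context old-k8s-version-20220601105850-6708 -n kube-system logs -l app=kindnet --tail=50
	# Did any CNI config ever land on the node? (/etc/cni/net.d and minikube's /etc/cni/net.mk assumed)
	out/minikube-linux-amd64 -p old-k8s-version-20220601105850-6708 ssh -- sudo ls -l /etc/cni/net.d /etc/cni/net.mk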
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-5644d7b6d9-86f9d metrics-server-6f89b5864b-hf7p6 storage-provisioner dashboard-metrics-scraper-6b84985989-7n8xp kubernetes-dashboard-6fb5469cf5-8d9mk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 describe pod coredns-5644d7b6d9-86f9d metrics-server-6f89b5864b-hf7p6 storage-provisioner dashboard-metrics-scraper-6b84985989-7n8xp kubernetes-dashboard-6fb5469cf5-8d9mk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601105850-6708 describe pod coredns-5644d7b6d9-86f9d metrics-server-6f89b5864b-hf7p6 storage-provisioner dashboard-metrics-scraper-6b84985989-7n8xp kubernetes-dashboard-6fb5469cf5-8d9mk: exit status 1 (56.204862ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-86f9d" not found
	Error from server (NotFound): pods "metrics-server-6f89b5864b-hf7p6" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6b84985989-7n8xp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-6fb5469cf5-8d9mk" not found

** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220601105850-6708 describe pod coredns-5644d7b6d9-86f9d metrics-server-6f89b5864b-hf7p6 storage-provisioner dashboard-metrics-scraper-6b84985989-7n8xp kubernetes-dashboard-6fb5469cf5-8d9mk: exit status 1
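The NotFound errors are almost certainly a namespace mismatch rather than vanished pods: the names were collected with -A across all namespaces, but the describe above passes no -n, so kubectl looks every pod up in default. Qualifying each name with its namespace would have produced the intended post-mortem:

	kubectl --context old-k8s-version-20220601105850-6708 -n kube-system describe pod coredns-5644d7b6d9-86f9d metrics-server-6f89b5864b-hf7p6 storage-provisioner
	kubectl --context old-k8s-version-20220601105850-6708 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6b84985989-7n8xp kubernetes-dashboard-6fb5469cf5-8d9mk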
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.36s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.41s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-q4zvb" [394f1f7a-2fdc-47f8-a080-4d40aefe4b3f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
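The Unschedulable reason is the standard symptom of a node that never became Ready: the node.kubernetes.io/not-ready taint stays on until the CNI initializes, and the dashboard pod carries no toleration for it. Two quick checks confirm which side is at fault:

	kubectl --context embed-certs-20220601110327-6708 get nodes -o wide
	kubectl --context embed-certs-20220601110327-6708 describe node embed-certs-20220601110327-6708 | grep -i -A2 taints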
E0601 11:25:40.379998    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:26:21.552175    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:27:03.423727    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
E0601 11:27:12.929378    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 11:27:21.870636    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 11:27:54.652486    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:28:34.904466    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
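The cert_rotation errors interleaved here are noise from the shared test process, not part of this failure: client-go's certificate reloader is still watching client.crt files for profiles that earlier tests already deleted (no-preload, kindnet, addons, functional, enable-default-cni, bridge). Listing what actually remains confirms they are stale watchers:

	out/minikube-linux-amd64 profile list
	ls /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/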

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... previous WARNING repeated 6 more times ...]
E0601 11:33:34.904605    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601104837-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:33:35.975379    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... previous WARNING repeated 10 more times ...]
E0601 11:33:47.194963    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
E0601 11:33:47.200200    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
E0601 11:33:47.210430    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:33:47.230748    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
[... same old-k8s-version-20220601105850-6708 client.crt error repeated 4 more times (client-go retry backoff) through 11:33:47.832 ...]
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:33:48.473030    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:33:49.753729    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... previous WARNING repeated 2 more times ...]
E0601 11:33:52.314445    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... previous WARNING repeated 4 more times ...]
E0601 11:33:57.435159    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... previous WARNING repeated 9 more times ...]
E0601 11:34:07.675954    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:34:09.242399    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... previous WARNING repeated 11 more times ...]
E0601 11:34:22.034708    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... previous WARNING repeated 2 more times ...]
E0601 11:34:24.596522    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... previous WARNING repeated 2 more times ...]
E0601 11:34:28.156808    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601105850-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... previous WARNING repeated 3 more times ...]
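Each "client rate limiter Wait returned an error: context deadline exceeded" above means the 9m0s wait context had already expired, so list calls failed inside client-go's rate limiter before ever reaching the API server; from the test's point of view the pod stayed frozen at Pending/Unschedulable. A manual re-check with the same selector, outside the expired context, would show its real state:

	kubectl --context embed-certs-20220601110327-6708 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide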
start_stop_delete_test.go:276: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
start_stop_delete_test.go:276: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-06-01 11:34:30.612145776 +0000 UTC m=+4471.611766014
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 describe po kubernetes-dashboard-8469778f77-q4zvb -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20220601110327-6708 describe po kubernetes-dashboard-8469778f77-q4zvb -n kubernetes-dashboard: context deadline exceeded (1.372µs)
start_stop_delete_test.go:276: kubectl --context embed-certs-20220601110327-6708 describe po kubernetes-dashboard-8469778f77-q4zvb -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 logs kubernetes-dashboard-8469778f77-q4zvb -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20220601110327-6708 logs kubernetes-dashboard-8469778f77-q4zvb -n kubernetes-dashboard: context deadline exceeded (135ns)
start_stop_delete_test.go:276: kubectl --context embed-certs-20220601110327-6708 logs kubernetes-dashboard-8469778f77-q4zvb -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
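Note the sub-microsecond exits above (1.372µs and 135ns): the post-mortem describe/logs commands were launched with the test's already-expired context, so they failed before executing and no dashboard pod logs were captured. Re-running the same two commands by hand, outside the harness, would have worked:

	kubectl --context embed-certs-20220601110327-6708 describe po kubernetes-dashboard-8469778f77-q4zvb -n kubernetes-dashboard
	kubectl --context embed-certs-20220601110327-6708 logs kubernetes-dashboard-8469778f77-q4zvb -n kubernetes-dashboard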
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601110327-6708
helpers_test.go:235: (dbg) docker inspect embed-certs-20220601110327-6708:

-- stdout --
	[
	    {
	        "Id": "b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d",
	        "Created": "2022-06-01T11:03:36.104826313Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270313,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:16:27.742788253Z",
	            "FinishedAt": "2022-06-01T11:16:26.518323114Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/hosts",
	        "LogPath": "/var/lib/docker/containers/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d/b77a5d5e61bfa6e31aa165e07cef7da486c7219a464787e9c662bc91861f785d-json.log",
	        "Name": "/embed-certs-20220601110327-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220601110327-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220601110327-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b9610
0ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/d
ocker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa92
4f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af793a78707e49005a668827ee182b67b74ca83491cbfb43256e792a6be931d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220601110327-6708",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220601110327-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220601110327-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220601110327-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72ab588bc7e123d3b05f17bdda997b104506e595ecdeb222d14dd57971293f56",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/72ab588bc7e1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220601110327-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b77a5d5e61bf",
	                        "embed-certs-20220601110327-6708"
	                    ],
	                    "NetworkID": "85c31b5e416e869b4ae1612c11e4fd39718a187a5009c211794c61313cb0c682",
	                    "EndpointID": "4966797cb9c652639f31bd37d26023d2cadd1e64690ba73eb6ab2fe001962d43",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
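The inspect output clears the Docker layer: the container has been running since 11:16:27 (the restart after the stop phase), holds 192.168.76.2 on the embed-certs network, and all five ports are published on 127.0.0.1, so the failure is inside the cluster rather than in the container runtime. The same two facts can be pulled without the full JSON dump:

	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' embed-certs-20220601110327-6708
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-20220601110327-6708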
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220601110327-6708 logs -n 25
E0601 11:34:31.087106    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:21 UTC | 01 Jun 22 11:21 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:25 UTC | 01 Jun 22 11:25 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:28 UTC | 01 Jun 22 11:28 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 UTC | 01 Jun 22 11:30 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 UTC | 01 Jun 22 11:30 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:19:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:19:52.827023  276679 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:19:52.827225  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827237  276679 out.go:309] Setting ErrFile to fd 2...
	I0601 11:19:52.827242  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827359  276679 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:19:52.827588  276679 out.go:303] Setting JSON to false
	I0601 11:19:52.828890  276679 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3747,"bootTime":1654078646,"procs":456,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:19:52.828955  276679 start.go:125] virtualization: kvm guest
	I0601 11:19:52.831944  276679 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:19:52.833439  276679 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:52.833372  276679 notify.go:193] Checking for updates...
	I0601 11:19:52.835007  276679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:52.836578  276679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:19:52.837966  276679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:19:52.839440  276679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:19:52.841215  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:52.841578  276679 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:52.880823  276679 docker.go:137] docker version: linux-20.10.16
	I0601 11:19:52.880897  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:52.978177  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:52.908721136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
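
The docker info dump above is one long structured line that minikube parses for its driver health checks. As a side note, individual fields can be read directly with a Go template instead of the full {{json .}} dump; this is only a sketch, and the field names below are taken from the keys visible in the dump (output will vary with the Docker version):

	# Pull single fields out of docker info with Go templates.
	docker system info --format '{{.ServerVersion}}'   # e.g. 20.10.16
	docker system info --format '{{.CgroupDriver}}'    # e.g. cgroupfs
	docker system info --format '{{.Driver}}'          # e.g. overlay2
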
	I0601 11:19:52.978275  276679 docker.go:254] overlay module found
	I0601 11:19:52.981078  276679 out.go:177] * Using the docker driver based on existing profile
	I0601 11:19:52.982316  276679 start.go:284] selected driver: docker
	I0601 11:19:52.982326  276679 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:52.982412  276679 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:52.983242  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:53.085320  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:53.012439643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:53.085561  276679 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:19:53.085581  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:19:53.085589  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:19:53.085608  276679 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:53.088575  276679 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.089964  276679 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:19:53.091501  276679 out.go:177] * Pulling base image ...
	I0601 11:19:53.092800  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:19:53.092839  276679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:19:53.092856  276679 cache.go:57] Caching tarball of preloaded images
	I0601 11:19:53.092897  276679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:19:53.093061  276679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:19:53.093076  276679 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:19:53.093182  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.136384  276679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:19:53.136410  276679 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:19:53.136424  276679 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:19:53.136454  276679 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:19:53.136550  276679 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 69.025µs
	I0601 11:19:53.136570  276679 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:19:53.136577  276679 fix.go:55] fixHost starting: 
	I0601 11:19:53.137208  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.168642  276679 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601110654-6708: state=Stopped err=<nil>
	W0601 11:19:53.168681  276679 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:19:53.170972  276679 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601110654-6708" ...
	I0601 11:19:50.719789  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.220276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.243194  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:55.243470  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:53.172500  276679 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.580842  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.615796  276679 kic.go:416] container "default-k8s-different-port-20220601110654-6708" state is running.
	I0601 11:19:53.616193  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.647308  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.647503  276679 machine.go:88] provisioning docker machine ...
	I0601 11:19:53.647526  276679 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:19:53.647560  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.679842  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:53.680106  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:53.680131  276679 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:19:53.680742  276679 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55946->127.0.0.1:49442: read: connection reset by peer
	I0601 11:19:56.807880  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:19:56.807951  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.839321  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:56.839475  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:56.839510  276679 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:19:56.951445  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: 
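
The SSH snippet above is minikube's idempotent hostname fix-up: it rewrites the 127.0.1.1 entry only when the hostname is missing from /etc/hosts. A hedged way to confirm the result from the host, where the container name is simply the profile under test:

	# Check the mapping the snippet wrote inside the node container.
	docker exec default-k8s-different-port-20220601110654-6708 \
	  grep '^127.0.1.1' /etc/hosts
	# expected: 127.0.1.1 default-k8s-different-port-20220601110654-6708
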
	I0601 11:19:56.951473  276679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:19:56.951491  276679 ubuntu.go:177] setting up certificates
	I0601 11:19:56.951499  276679 provision.go:83] configureAuth start
	I0601 11:19:56.951539  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.982392  276679 provision.go:138] copyHostCerts
	I0601 11:19:56.982451  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:19:56.982464  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:19:56.982537  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:19:56.982652  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:19:56.982664  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:19:56.982697  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:19:56.982789  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:19:56.982802  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:19:56.982829  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:19:56.982876  276679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
	I0601 11:19:57.067574  276679 provision.go:172] copyRemoteCerts
	I0601 11:19:57.067626  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:19:57.067654  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.098669  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.182904  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:19:57.199734  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:19:57.215838  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:19:57.232284  276679 provision.go:86] duration metric: configureAuth took 280.774927ms
	I0601 11:19:57.232312  276679 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:19:57.232468  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:57.232480  276679 machine.go:91] provisioned docker machine in 3.584963826s
	I0601 11:19:57.232486  276679 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:19:57.232492  276679 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:19:57.232530  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:19:57.232572  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.265048  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.351029  276679 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:19:57.353646  276679 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:19:57.353677  276679 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:19:57.353687  276679 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:19:57.353695  276679 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:19:57.353706  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:19:57.353765  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:19:57.353858  276679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:19:57.353951  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:19:57.360153  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:19:57.376881  276679 start.go:309] post-start completed in 144.384989ms
	I0601 11:19:57.376932  276679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:19:57.376962  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.411118  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.496188  276679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:19:57.499982  276679 fix.go:57] fixHost completed within 4.363400058s
	I0601 11:19:57.500005  276679 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 4.363442227s
	I0601 11:19:57.500082  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532057  276679 ssh_runner.go:195] Run: systemctl --version
	I0601 11:19:57.532107  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532107  276679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:19:57.532168  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.567039  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.567550  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.677865  276679 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:19:57.688848  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:19:57.697588  276679 docker.go:187] disabling docker service ...
	I0601 11:19:57.697632  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:19:57.706476  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:19:57.714826  276679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:19:57.791919  276679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:19:55.719582  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:58.219607  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:57.743387  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:00.243011  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:57.865357  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:19:57.874183  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:19:57.886120  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.893706  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.901159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.908873  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.916512  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:19:57.923712  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
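
The base64 payload piped into 02-containerd.conf above decodes to a single TOML line pinning the containerd config schema version; this is easy to confirm locally (the string is copied verbatim from the log line above):

	# Decode the containerd drop-in that minikube imports via config.toml.
	printf %s "dmVyc2lvbiA9IDIK" | base64 -d
	# prints: version = 2
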
	I0601 11:19:57.935738  276679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:19:57.941802  276679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:19:57.947777  276679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:19:58.021579  276679 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:19:58.089337  276679 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:19:58.089424  276679 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:19:58.092751  276679 start.go:468] Will wait 60s for crictl version
	I0601 11:19:58.092798  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:19:58.116611  276679 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:19:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:20:00.719494  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:03.219487  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:02.243060  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:04.243463  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:06.244423  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:05.719159  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:07.719735  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:09.163975  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:20:09.186613  276679 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:20:09.186676  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.214385  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.243587  276679 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:20:09.245245  276679 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:09.276501  276679 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:20:09.279800  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
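
The grep/rewrite pair above appends a host.minikube.internal alias for the network gateway (192.168.49.1) to the node's /etc/hosts, dropping any stale entry first. A hedged check, run inside the node (e.g. via docker exec or minikube ssh):

	# Confirm the gateway alias resolves through /etc/hosts.
	getent hosts host.minikube.internal
	# expected: 192.168.49.1    host.minikube.internal
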
	I0601 11:20:09.290992  276679 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:20:08.742836  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:11.242670  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:09.292426  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:20:09.292493  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.315170  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.315189  276679 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:20:09.315224  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.338119  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.338137  276679 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:20:09.338184  276679 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:20:09.360773  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:09.360799  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:09.360817  276679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:20:09.360831  276679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:20:09.360999  276679 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
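
The rendered kubeadm config above carries this profile's non-default API server port (8444) in both localAPIEndpoint.bindPort and controlPlaneEndpoint. Once the file lands on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below), a quick hedged spot check is:

	# Verify the non-default port made it into the rendered config.
	sudo grep -nE 'bindPort|controlPlaneEndpoint' /var/tmp/minikube/kubeadm.yaml.new
	# expected: bindPort: 8444 and control-plane.minikube.internal:8444
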
	
	I0601 11:20:09.361105  276679 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
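
The unit fragment above is installed as a systemd drop-in (10-kubeadm.conf, copied below); the empty ExecStart= line first clears any packaged command before the containerd-specific invocation is set. On the node, the effective unit plus its drop-ins can be inspected with standard systemd tooling:

	# Show the kubelet unit together with its drop-ins.
	systemctl cat kubelet
	# or read the drop-in written by minikube directly:
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
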
	I0601 11:20:09.361162  276679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:20:09.368101  276679 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:20:09.368169  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:20:09.374382  276679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:20:09.386282  276679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:20:09.398188  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:20:09.409736  276679 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:20:09.412458  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.420789  276679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:20:09.420897  276679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:20:09.420940  276679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:20:09.421000  276679 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:20:09.421053  276679 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:20:09.421088  276679 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:20:09.421176  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:20:09.421205  276679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:20:09.421216  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:20:09.421244  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:20:09.421270  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:20:09.421298  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:20:09.421334  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:20:09.421917  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:20:09.438490  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:20:09.454711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:20:09.471469  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:20:09.488271  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:20:09.504375  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:20:09.520473  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:20:09.536663  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:20:09.552725  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:20:09.568724  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:20:09.584711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:20:09.600406  276679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
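
The scp calls above lay the key material out in two places: cluster certificates and keys under /var/lib/minikube/certs, where the kubeadm phases later expect them, and CA material under /usr/share/ca-certificates as input to the host trust store. A quick manual check of that layout:

	# Verify the layout the copies above establish (paths as logged).
	ls -l /var/lib/minikube/certs /usr/share/ca-certificates
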
	I0601 11:20:09.611814  276679 ssh_runner.go:195] Run: openssl version
	I0601 11:20:09.616280  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:20:09.623058  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625881  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625913  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.630367  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:20:09.636712  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:20:09.643407  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646316  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646366  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.650791  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:20:09.657126  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:20:09.663990  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666934  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666966  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.671359  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
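
Each trust-store certificate is then registered the way OpenSSL expects: hash the subject with openssl x509 -hash and symlink the PEM as <hash>.0 under /etc/ssl/certs, which is exactly what the test-and-link one-liners above do. The same step by hand, using the minikubeCA case from the log (where the hash came out as b5213941):

	# Subject-hash symlink convention used above.
	CERT=/etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo test -L "/etc/ssl/certs/${HASH}.0" || sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
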
	I0601 11:20:09.677573  276679 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:09.677668  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:20:09.677695  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:09.700805  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:09.700825  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:09.700835  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:09.700844  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:09.700853  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:09.700863  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:09.700870  276679 cri.go:87] found id: ""
	I0601 11:20:09.700900  276679 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:20:09.711953  276679 cri.go:114] JSON = null
	W0601 11:20:09.711995  276679 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
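
The warning above comes from comparing two views of the runtime: crictl ps found six kube-system containers, while runc list scoped to the k8s.io root returned null, so there was nothing to unpause and minikube logs the mismatch and continues. The same comparison by hand (a sketch only; jq is assumed to be available and is not part of minikube's code path):

	# Compare runc's paused set against crictl's container list, as above.
	sudo runc --root /run/containerd/runc/k8s.io list -f json \
	  | jq -r '.[]? | select(.status == "paused") | .id' | wc -l
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
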
	I0601 11:20:09.712052  276679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:20:09.718628  276679 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:20:09.718649  276679 kubeadm.go:626] restartCluster start
	I0601 11:20:09.718687  276679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:20:09.724992  276679 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.725567  276679 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601110654-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:20:09.725941  276679 kubeconfig.go:127] "default-k8s-different-port-20220601110654-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:20:09.726552  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
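
The kubeconfig repair above is driven by a simple check: if the profile's context is absent from the shared kubeconfig, minikube rewrites the file under an exclusive write lock (note the WriteFile lock with a 500ms delay and 1m timeout). Detecting the same condition by hand, with the context name from this log:

	# Is the profile's context present in the kubeconfig?
	kubectl config get-contexts -o name \
	  | grep -qx default-k8s-different-port-20220601110654-6708 \
	  || echo 'context missing; minikube will repair the kubeconfig'
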
	I0601 11:20:09.727803  276679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:20:09.734151  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.734186  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.741699  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.942065  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.942125  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.950479  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.142775  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.142860  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.151184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.342428  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.342511  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.350942  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.542230  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.542324  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.550731  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.741765  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.741840  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.750184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.942518  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.942589  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.951137  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.142442  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.142519  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.151332  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.342632  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.342693  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.351149  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.542423  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.542483  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.550625  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.741869  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.741945  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.750554  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.942776  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.942855  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.951226  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.142534  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.142617  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.151065  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.342354  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.342429  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.350855  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.542142  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.542207  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.550615  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.741824  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.741894  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.750511  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.750537  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.750569  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.758099  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.758124  276679 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
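
The long run of "Checking apiserver status" lines above is a roughly 200ms poll on pgrep: exit status 1 means no kube-apiserver process matched yet, and once the window expires minikube concludes the cluster "needs reconfigure". A sketch of that loop (the timeout value here is illustrative):

	# Poll for the apiserver process as above; pgrep exits 1 until it appears.
	deadline=$((SECONDS + 3))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo 'timed out waiting for the condition'; break
	  fi
	  sleep 0.2
	done
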
	I0601 11:20:12.758131  276679 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:20:12.758146  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:20:12.758196  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:12.782896  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:12.782918  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:12.782924  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:12.782931  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:12.782936  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:12.782943  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:12.782948  276679 cri.go:87] found id: ""
	I0601 11:20:12.782955  276679 cri.go:232] Stopping containers: [fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44]
	I0601 11:20:12.782994  276679 ssh_runner.go:195] Run: which crictl
	I0601 11:20:12.785799  276679 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44
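
Stopping the kube-system containers is a two-step seen in the Run: lines above: resolve crictl's path with which, then pass the full ID list from the earlier crictl ps to a single crictl stop. Condensed:

	# Bulk-stop kube-system containers, as above.
	ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
	# $ids is intentionally unquoted so the IDs split into separate arguments.
	[ -n "$ids" ] && sudo "$(which crictl)" stop $ids
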
	I0601 11:20:12.809504  276679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:20:12.819061  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:20:12.825913  276679 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:07 /etc/kubernetes/scheduler.conf
	
	I0601 11:20:12.825968  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 11:20:10.219173  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:12.219371  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:13.243691  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:15.243798  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:12.832916  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 11:20:12.839178  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.845567  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.845605  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.851603  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 11:20:12.857919  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.857967  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
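
The grep/rm pairs above implement the conf sanity check: each kubeconfig under /etc/kubernetes must point at https://control-plane.minikube.internal:8444, and on a grep miss (exit status 1) the file is deleted so the kubeadm phases below can regenerate it. The same check for the two files that failed here:

	# Remove any control-plane conf that no longer points at port 8444.
	for f in /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8444' "$f" || sudo rm -f "$f"
	done
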
	I0601 11:20:12.864112  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870523  276679 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870540  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:12.912381  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.433508  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.566844  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.617762  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
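
Rather than a full kubeadm init, the restart path replays individual init phases against the staged /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. The equivalent by hand (binary path and config path as logged):

	# Replay the kubeadm init phases used above against the staged config.
	K=/var/lib/minikube/binaries/v1.23.6
	# $phase is intentionally unquoted so 'certs all' splits into two arguments.
	for phase in 'certs all' 'kubeconfig all' kubelet-start 'control-plane all' 'etcd local'; do
	  sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done
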
	I0601 11:20:13.686212  276679 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:20:13.686269  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.195273  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.695296  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.195457  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.695544  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.195542  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.695465  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.195333  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.694666  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.719337  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.218953  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.742741  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:20.244002  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:18.194692  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:18.694918  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.195623  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.695137  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.758656  276679 api_server.go:71] duration metric: took 6.072444993s to wait for apiserver process to appear ...
	I0601 11:20:19.758687  276679 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:20:19.758700  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.369047  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:20:22.369078  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:20:19.718920  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:21.719314  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:23.719804  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:22.869917  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.874561  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:22.874589  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
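
The healthz probes above tell a consistent story: an unauthenticated request is rejected 403 for system:anonymous, the verbose endpoint then reports 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, and it finally flips to 200 "ok". The verbose output can be reproduced with a plain curl (which will see the anonymous 403 unless client credentials are supplied):

	# Probe the apiserver health endpoint as above; -k skips TLS verification.
	curl -sk 'https://192.168.49.2:8444/healthz?verbose'
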
	I0601 11:20:23.370203  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.375048  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:23.375073  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.869242  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.874012  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0601 11:20:23.879941  276679 api_server.go:140] control plane version: v1.23.6
	I0601 11:20:23.879963  276679 api_server.go:130] duration metric: took 4.121269797s to wait for apiserver health ...
	I0601 11:20:23.879972  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:23.879977  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:23.882052  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:20:22.743507  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:25.242700  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:23.883460  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:20:23.886921  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:20:23.886945  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:20:23.899955  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
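
CNI configuration here is a stat-then-apply: confirm the portmap plugin binary exists under /opt/cni/bin, push the kindnet manifest to /var/tmp/minikube/cni.yaml over SSH, then apply it with the cluster-pinned kubectl and kubeconfig. Condensed:

	# Apply the CNI manifest as above (paths and version as logged).
	stat /opt/cni/bin/portmap
	sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
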
	I0601 11:20:24.544438  276679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:20:24.550979  276679 system_pods.go:59] 9 kube-system pods found
	I0601 11:20:24.551015  276679 system_pods.go:61] "coredns-64897985d-9gcj2" [28e98fca-a88b-422d-9f4b-797b18a8ff7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551025  276679 system_pods.go:61] "etcd-default-k8s-different-port-20220601110654-6708" [3005e651-1349-4d5e-b06f-e0fac3064ccf] Running
	I0601 11:20:24.551035  276679 system_pods.go:61] "kindnet-7fspq" [eefcd8e6-51e4-4d48-a420-93f4b47cf732] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:20:24.551042  276679 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601110654-6708" [974fafdd-9176-4d97-acd7-9874d63b4987] Running
	I0601 11:20:24.551053  276679 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601110654-6708" [38b2c1a1-9a1a-4a1f-9fac-904e47d545be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:20:24.551066  276679 system_pods.go:61] "kube-proxy-slzcl" [a1a6237f-6142-4e31-8bd4-72afd4f8a4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:20:24.551083  276679 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601110654-6708" [42ce6176-36e5-46bc-a443-19e4ca958785] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:20:24.551092  276679 system_pods.go:61] "metrics-server-b955d9d8-2k9wk" [fbc457b5-c359-4b84-abe5-d488874181f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551102  276679 system_pods.go:61] "storage-provisioner" [48086474-3417-47ff-970d-f7cf7806983b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551112  276679 system_pods.go:74] duration metric: took 6.652373ms to wait for pod list to return data ...
	I0601 11:20:24.551126  276679 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:20:24.553819  276679 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:20:24.553843  276679 node_conditions.go:123] node cpu capacity is 8
	I0601 11:20:24.553854  276679 node_conditions.go:105] duration metric: took 2.721044ms to run NodePressure ...
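
The NodePressure check reads capacity straight off the Node object: about 304Gi of ephemeral storage and 8 CPUs here. The same fields via kubectl:

	# Read the node capacity fields checked above.
	kubectl get nodes -o jsonpath='{range .items[*]}{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}{end}'
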
	I0601 11:20:24.553869  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:24.680194  276679 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683686  276679 kubeadm.go:777] kubelet initialised
	I0601 11:20:24.683708  276679 kubeadm.go:778] duration metric: took 3.487172ms waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683715  276679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
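
After the restarted kubelet initialises, minikube "extra waits" up to 4m for every system-critical pod matching the label list above to report Ready; the interleaved pod_ready lines that follow are that loop watching coredns stay Pending behind the node.kubernetes.io/not-ready taint. An approximation with kubectl wait (minikube itself uses a client-go poll, not kubectl):

	# Approximate the extra wait above with kubectl (labels as logged).
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
	done
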
	I0601 11:20:24.689167  276679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	I0601 11:20:26.694484  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:26.219205  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:28.219317  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:27.243486  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:29.742717  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:31.742800  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:28.695017  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.695110  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:32.695566  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.219646  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:32.719074  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:34.242643  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:36.243891  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.195305  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:37.197596  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.219473  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:37.719336  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:38.243963  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.743349  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:39.695270  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:42.195160  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.218932  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.719276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.743398  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.243686  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:44.694661  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:46.695274  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.219350  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.719698  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.742813  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.244047  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:48.696514  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:51.195247  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.218967  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.219422  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.743394  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.743515  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:53.694370  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:55.694640  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:57.695171  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.719514  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.219033  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.242819  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.243739  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.739945  270029 pod_ready.go:81] duration metric: took 4m0.002166585s waiting for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
	E0601 11:20:59.739968  270029 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:20:59.739995  270029 pod_ready.go:38] duration metric: took 4m0.008917217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:59.740018  270029 kubeadm.go:630] restartCluster took 4m15.707393707s
	W0601 11:20:59.740131  270029 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
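	The failure mode is visible in the repeated status dumps above: the CoreDNS pod stays Pending with PodScheduled=False because the lone node still carries the node.kubernetes.io/not-ready taint, so the scheduler reports "0/1 nodes are available" until the 4m0s wait expires. A minimal sketch of how one could confirm this from inside the node (run via `minikube ssh -p <profile>`; the kubectl binary and kubeconfig paths are the in-node ones used throughout this log):
	  # Show the taint blocking scheduling and the node's Ready condition
	  sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig describe node | grep -E 'Taints|Ready'
	  # Recent kube-system events often name the cause (typically CNI not yet initialized)
	  sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get events -n kube-system --sort-by=.lastTimestamp | tail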
	I0601 11:20:59.740156  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:21:01.430762  270029 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.690579833s)
	I0601 11:21:01.430838  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:01.440364  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:21:01.447145  270029 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:21:01.447194  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:21:01.453852  270029 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
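	The exit status 2 from ls is expected at this point: kubeadm reset removed the client configs, so minikube skips the stale-config cleanup and falls through to a fresh kubeadm init below. The same check as a hypothetical one-liner, equivalent to the sudo ls above:
	  for f in admin kubelet controller-manager scheduler; do
	    test -f /etc/kubernetes/$f.conf || echo "$f.conf missing (will be regenerated by kubeadm init)"
	  done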
	I0601 11:21:01.453891  270029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:21:01.701224  270029 out.go:204]   - Generating certificates and keys ...
	I0601 11:21:00.194872  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:02.195437  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.219067  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:01.219719  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:03.719181  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:02.294583  270029 out.go:204]   - Booting up control plane ...
	I0601 11:21:04.694423  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:06.695087  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:05.719516  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:07.719966  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:09.195174  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:11.694583  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:10.218984  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:12.219075  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:14.337355  270029 out.go:204]   - Configuring RBAC rules ...
	I0601 11:21:14.750718  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:21:14.750741  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:21:14.752905  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:21:14.754285  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:21:14.758047  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:21:14.758065  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:21:14.771201  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
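	Because the docker driver is paired with the containerd runtime, minikube selects kindnet as the CNI (cni.go:162 above), stats /opt/cni/bin/portmap to confirm the plugin binaries are present, then applies the manifest with the versioned kubectl. A sketch of verifying the CNI afterwards (the kindnet DaemonSet name is an assumption based on minikube's kindnet manifest, which is not shown in this log):
	  sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get ds kindnet
	  ls /opt/cni/bin    # portmap, bridge, and the other plugins shipped in the base image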
	I0601 11:21:15.434277  270029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:21:15.434380  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.434381  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601110327-6708 minikube.k8s.io/updated_at=2022_06_01T11_21_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489119  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489208  270029 ops.go:34] apiserver oom_adj: -16
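	The -16 read back here means the kernel OOM killer will strongly deprioritize the apiserver (the legacy oom_adj scale runs from -17, never kill, to +15). The probe minikube ran is reproducible by hand on the node:
	  # Same check as the ssh_runner call above; requires kube-apiserver to be running
	  cat /proc/$(pgrep kube-apiserver)/oom_adj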
	I0601 11:21:16.079192  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:16.579319  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:14.194681  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:16.694557  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:14.219440  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:16.719363  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:17.079349  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:17.579548  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.079683  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.579186  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.079819  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.579346  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.079183  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.579984  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.079335  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.579766  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.694796  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:21.194627  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:19.218867  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:21.219185  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:23.719814  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:22.079321  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:22.579993  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.079856  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.579743  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.079256  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.579276  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.079828  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.579763  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.080068  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.579388  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.694527  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:25.694996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:27.079269  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.579729  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.636171  270029 kubeadm.go:1045] duration metric: took 12.201851278s to wait for elevateKubeSystemPrivileges.
	I0601 11:21:27.636205  270029 kubeadm.go:397] StartCluster complete in 4m43.646757592s
	I0601 11:21:27.636227  270029 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:27.636334  270029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:21:27.637880  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:28.157076  270029 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601110327-6708" rescaled to 1
	I0601 11:21:28.157150  270029 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:21:28.157180  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:21:28.159818  270029 out.go:177] * Verifying Kubernetes components...
	I0601 11:21:28.157185  270029 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:21:28.157406  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:21:28.161484  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:28.161496  270029 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161511  270029 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161523  270029 addons.go:165] addon metrics-server should already be in state true
	I0601 11:21:28.161537  270029 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161566  270029 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601110327-6708"
	I0601 11:21:28.161573  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	W0601 11:21:28.161579  270029 addons.go:165] addon dashboard should already be in state true
	I0601 11:21:28.161483  270029 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161622  270029 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161631  270029 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:21:28.161636  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161669  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161500  270029 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161709  270029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601110327-6708"
	I0601 11:21:28.161949  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162094  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162123  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162229  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.209663  270029 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.211523  270029 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:21:28.213009  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:21:28.213030  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:21:28.213079  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.216922  270029 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.218989  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:21:28.217201  270029 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.219035  270029 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:21:28.219075  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.219579  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.219012  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:21:28.219781  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.236451  270029 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:21:26.218905  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.219209  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.238138  270029 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.238163  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:21:28.238217  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.246850  270029 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:21:28.246885  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
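	The pipeline above patches the coredns ConfigMap in place: sed inserts a hosts plugin block ahead of the Corefile's forward directive so that host.minikube.internal resolves to the host-side gateway (192.168.76.1 for this cluster). The injected block, reconstructed from the sed expression, is:
	  hosts {
	     192.168.76.1 host.minikube.internal
	     fallthrough
	  }
	fallthrough keeps lookups for every other name flowing on to the forward plugin.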
	I0601 11:21:28.273680  270029 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.273707  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:21:28.273761  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.278846  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.279320  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.286384  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.321729  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.455756  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:21:28.455785  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:21:28.466348  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.469026  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:21:28.469067  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:21:28.469486  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.478076  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:21:28.478099  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:21:28.487008  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:21:28.487036  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:21:28.573106  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:21:28.573135  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:21:28.574698  270029 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0601 11:21:28.577019  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.577042  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:21:28.653936  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:21:28.653967  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:21:28.658482  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.671762  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:21:28.671808  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:21:28.758424  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:21:28.758516  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:21:28.776703  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:21:28.776735  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:21:28.794636  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:21:28.794670  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:21:28.959418  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:28.959449  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:21:28.976465  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:29.354605  270029 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601110327-6708"
	I0601 11:21:29.699561  270029 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:21:29.700807  270029 addons.go:417] enableAddons completed in 1.543631535s
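	With the addon manifests applied, the dashboard and metrics-server run as ordinary workloads; a quick follow-up check, not part of the test run (the kubernetes-dashboard namespace comes from dashboard-ns.yaml above):
	  sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kubernetes-dashboard get pods
	  sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get deploy metrics-server
	Note that metrics-server was pointed at fake.domain/k8s.gcr.io/echoserver:1.4 above; that image cannot be pulled, so its pod is not expected to become ready in this test.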
	I0601 11:21:30.260215  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:28.196140  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.694688  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:32.695236  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.219534  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.219685  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.260412  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:34.760173  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:36.760442  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:35.195034  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:37.195304  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:34.718805  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:36.719108  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:38.760533  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:40.761060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:39.694703  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:42.195994  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:39.219402  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:41.718982  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.719227  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.259684  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.260363  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.719329  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.719480  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.721505  254820 node_ready.go:38] duration metric: took 4m0.008123732s waiting for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:21:47.723918  254820 out.go:177] 
	W0601 11:21:47.725406  254820 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:21:47.725423  254820 out.go:239] * 
	W0601 11:21:47.726098  254820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:21:47.728001  254820 out.go:177] 
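	That ends the old-k8s-version run: the node never reported Ready before the readiness wait expired (4m0.008s spent in node_ready alone), so start exits with GUEST_START as shown. The follow-up the message box recommends, plus a node-level look, would be (profile name taken from this log; standard minikube flags):
	  out/minikube-linux-amd64 logs --file=logs.txt -p old-k8s-version-20220601105850-6708
	  out/minikube-linux-amd64 kubectl -p old-k8s-version-20220601105850-6708 -- describe node old-k8s-version-20220601105850-6708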
	I0601 11:21:44.695306  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.194624  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.760960  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:50.260784  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:49.195368  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:51.694946  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:52.760281  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:55.259912  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:54.194912  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:56.195652  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:57.259956  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:59.759755  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:01.759853  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:58.694995  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:01.194431  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:03.760721  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:06.260069  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:03.195297  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:05.694312  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:07.695082  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:08.260739  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:10.760237  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:10.194760  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:12.194885  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:13.259813  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:15.260153  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:14.195226  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:16.694528  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:17.260859  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:19.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:21.760654  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:18.695235  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:21.194694  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:24.260433  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:26.760129  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:23.197530  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:25.695229  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:28.760717  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:31.260368  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:28.194771  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:30.195026  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:32.694696  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:33.760112  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:35.760758  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:34.694930  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:36.695375  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:38.260723  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:40.760393  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:39.194795  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:41.694750  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:43.259823  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:45.260551  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:44.195389  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:46.695489  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:47.760311  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:49.760404  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:49.194395  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:51.195245  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:52.260594  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:54.760044  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:56.760073  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:53.195327  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:55.694893  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:58.760157  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:01.260267  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:58.194547  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:00.694762  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:03.260561  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:05.260780  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:03.195176  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:05.694698  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.695208  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.760513  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.260326  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.195039  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.695240  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.260674  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:14.260918  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:16.760064  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:15.195155  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:17.195241  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:18.760686  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:21.260676  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:19.694620  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:21.694667  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:23.760024  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:26.259746  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:24.194510  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:26.194546  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:28.260714  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:30.760541  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:28.194917  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:30.694766  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:33.260035  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:35.261060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:33.195328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:35.694682  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.695340  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.760144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.260334  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.194751  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.194853  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.759808  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:46.760285  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.695010  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:46.695526  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:48.760374  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:51.260999  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:49.194307  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:51.195053  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:53.760587  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:56.260172  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:53.195339  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:55.695153  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:58.759799  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:00.760631  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:58.194738  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:00.195407  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:02.695048  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:03.260687  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:05.260722  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:04.695337  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.194665  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.760567  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:10.260596  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:09.195069  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:11.694328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:12.260967  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.759793  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:16.760292  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.194996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:16.694542  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:18.760531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:20.760689  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:18.694668  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:20.695051  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:23.195952  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:24.691928  276679 pod_ready.go:81] duration metric: took 4m0.002724634s waiting for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	E0601 11:24:24.691955  276679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:24:24.691981  276679 pod_ready.go:38] duration metric: took 4m0.008258762s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:24:24.692005  276679 kubeadm.go:630] restartCluster took 4m14.973349857s
	W0601 11:24:24.692130  276679 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
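	For context on the interleaved streams above: both pod_ready.go and node_ready.go poll a Ready condition on a fixed interval until a deadline (here 4m0s for the coredns pod). A minimal client-go sketch of that pattern follows; waitPodReady is an invented name, the 2s interval is approximated from the timestamp spacing in the log, and this is not minikube's actual implementation.

// Hypothetical sketch, not minikube's pod_ready.go: poll a pod's Ready
// condition on a fixed interval until it flips true or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // a Pending pod may carry no Ready condition yet
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-64897985d-9gcj2", 4*time.Minute))
}

	The node_ready.go loop in the 270029 stream has the same shape, reading the node's Ready condition instead; that stream never sees it flip, which is why its lines continue past this point.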
	I0601 11:24:24.692159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:24:26.286416  276679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.594228976s)
	I0601 11:24:26.286489  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:26.296314  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:24:26.303059  276679 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:24:26.303116  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:24:26.309917  276679 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
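	The exit-2 ls above is how the stale-config probe reports a clean slate: none of the four kubeconfigs exist, so there is nothing to clean up before kubeadm init. An illustrative local stand-in for that probe (the real check runs ls -la over ssh; hasStaleConfigs is an invented helper):

// Illustrative only: a local stand-in for the stale-config check above,
// using os.Stat where the real probe shells out over ssh.
package main

import (
	"fmt"
	"os"
)

func hasStaleConfigs(paths []string) bool {
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return true // at least one old config survives; clean it up first
		}
	}
	return false // matches the exit-2 case above: nothing to clean, fresh init
}

func main() {
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	fmt.Println("stale configs present:", hasStaleConfigs(configs))
}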
	I0601 11:24:26.309957  276679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:24:22.761011  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:25.261206  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:26.556270  276679 out.go:204]   - Generating certificates and keys ...
	I0601 11:24:27.302083  276679 out.go:204]   - Booting up control plane ...
	I0601 11:24:27.261441  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:29.759885  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:32.260145  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:34.260990  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:36.760710  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:38.840585  276679 out.go:204]   - Configuring RBAC rules ...
	I0601 11:24:39.253770  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:24:39.253791  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:24:39.255739  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:24:39.259837  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:41.260124  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:39.257207  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:24:39.261207  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:24:39.261228  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:24:39.273744  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
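	The two Run: lines above stage the kindnet manifest (copied from memory to /var/tmp/minikube/cni.yaml) and apply it with the pinned kubectl. A rough sketch of that sequence, assuming the manifest bytes are already in hand; the placeholder YAML is not the real kindnet manifest, and in the log the write happens via ssh_runner's scp rather than a local file write:

// A minimal sketch, not minikube's code: stage a CNI manifest on disk and
// apply it with the cluster's pinned kubectl, as the Run: lines above do.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# kindnet DaemonSet YAML would go here\n") // placeholder content
	if err := os.MkdirAll("/var/tmp/minikube", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0644); err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.6/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
}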
	I0601 11:24:39.861493  276679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:24:39.861573  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.861574  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_24_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.914842  276679 ops.go:34] apiserver oom_adj: -16
	I0601 11:24:39.914913  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.498901  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.998931  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.499031  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.998593  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:42.499160  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.260473  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:45.760870  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:42.998966  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.498638  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.998319  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.498531  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.998678  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.499193  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.998418  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.498985  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.998941  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:47.498945  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.260450  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:50.260933  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:47.999272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.498439  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.999292  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.499272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.998339  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.498332  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.999106  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.499296  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.998980  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.498623  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.998371  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.498515  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.594790  276679 kubeadm.go:1045] duration metric: took 13.733266896s to wait for elevateKubeSystemPrivileges.
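	The burst of kubectl get sa default calls at 500ms spacing above is a wait for the "default" ServiceAccount to exist, so the minikube-rbac cluster-admin binding created earlier can take effect; the loop exits on the first zero exit status (13.7s here). A self-contained sketch of that cadence (the 2-minute deadline is an invented value):

// Hedged sketch of the retry loop implied above: re-run `kubectl get sa
// default` every 500ms until it succeeds or a deadline passes.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.6/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			log.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for default ServiceAccount")
}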
	I0601 11:24:53.594820  276679 kubeadm.go:397] StartCluster complete in 4m43.917251881s
	I0601 11:24:53.594841  276679 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:53.594938  276679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:24:53.596907  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:54.111475  276679 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:24:54.111547  276679 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:24:54.113711  276679 out.go:177] * Verifying Kubernetes components...
	I0601 11:24:54.111604  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:24:54.111644  276679 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:24:54.111802  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:24:54.115020  276679 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115035  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:54.115035  276679 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115048  276679 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115055  276679 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115057  276679 addons.go:165] addon storage-provisioner should already be in state true
	W0601 11:24:54.115064  276679 addons.go:165] addon metrics-server should already be in state true
	I0601 11:24:54.115034  276679 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115103  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115109  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115112  276679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115037  276679 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115134  276679 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115144  276679 addons.go:165] addon dashboard should already be in state true
	I0601 11:24:54.115176  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115416  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115596  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115611  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115615  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.129176  276679 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:24:54.168194  276679 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:24:54.169714  276679 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.171144  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:24:54.170891  276679 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.171181  276679 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:24:54.171211  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.171167  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:24:54.171329  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.171684  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.176157  276679 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.177770  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:24:54.177796  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:24:54.179131  276679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:24:54.177859  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.180787  276679 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.180809  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:24:54.180855  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.233206  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.240234  276679 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.240263  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:24:54.240311  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.240743  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.242497  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.255476  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:24:54.289597  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
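	Each sshutil client above dials 127.0.0.1:49442, a host port discovered with the docker container inspect -f template in the preceding cli_runner lines: it indexes the container's port map for "22/tcp" and takes the first binding's HostPort. A sketch of that lookup (hostSSHPort is an invented name):

// Sketch of the port lookup above: ask Docker for the host port mapped to
// the container's 22/tcp so the ssh client can dial 127.0.0.1:<port>.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("default-k8s-different-port-20220601110654-6708")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh on 127.0.0.1:" + port)
}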
	I0601 11:24:54.510589  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.510747  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:24:54.510770  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:24:54.556919  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:24:54.556950  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:24:54.566012  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:24:54.566042  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:24:54.569528  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.576575  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:24:54.576625  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:24:54.654525  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:24:54.654551  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:24:54.655296  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.655319  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:24:54.661290  276679 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
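	The host-record injection confirmed above is the sed pipeline from 11:24:54.255: splice a hosts{} stanza ahead of CoreDNS's forward directive so host.minikube.internal resolves to the gateway address. A toy Go equivalent (insertHostRecord and the sample Corefile are invented for illustration):

// Toy reimplementation of the sed splice above: insert a hosts{} stanza
// immediately before the Corefile's forward directive.
package main

import (
	"fmt"
	"strings"
)

func insertHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	marker := "        forward . /etc/resolv.conf"
	return strings.Replace(corefile, marker, stanza+marker, 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(insertHostRecord(corefile, "192.168.49.1"))
}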
	I0601 11:24:54.671592  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:24:54.671621  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:24:54.673696  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.687107  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:24:54.687133  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:24:54.768961  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:24:54.768989  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:24:54.854363  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:24:54.854399  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:24:54.870735  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:24:54.870762  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:24:54.888031  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:54.888063  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:24:54.967082  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:55.273650  276679 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:55.661065  276679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 11:24:52.261071  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:54.261578  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:56.760078  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:55.662561  276679 addons.go:417] enableAddons completed in 1.550935677s
	I0601 11:24:56.136034  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:24:58.760245  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:00.760344  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:58.136131  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:00.136759  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:02.636409  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:03.260144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.260531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.136779  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.635969  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.760027  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:09.760904  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:10.136336  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.636564  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.260100  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.759992  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:16.760260  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.636694  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:17.137058  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:19.260136  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:21.260700  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:19.636331  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:22.136010  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:23.760875  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:26.261082  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:24.136501  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:26.636646  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:28.263320  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:28.263343  270029 node_ready.go:38] duration metric: took 4m0.016466534s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:25:28.265930  270029 out.go:177] 
	W0601 11:25:28.267524  270029 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:25:28.267549  270029 out.go:239] * 
	W0601 11:25:28.268404  270029 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:25:28.269962  270029 out.go:177] 
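	The node_ready polling above is reading the Node's Ready condition. The same condition can be inspected directly; a minimal diagnostic sketch, assuming kubectl is pointed at the failing profile's kubeconfig:

	    kubectl get node embed-certs-20220601110327-6708 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}: {.status.conditions[?(@.type=="Ready")].message}'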
	I0601 11:25:28.637161  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:31.135894  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:33.136655  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:35.635923  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:37.636131  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:39.636319  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:42.136004  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:44.136847  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:46.636704  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:49.136203  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:51.136808  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:53.636402  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:56.135580  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:58.135934  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:00.136698  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:02.136807  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:04.636360  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:07.136003  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:09.136403  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:11.636023  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:13.636284  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:16.136059  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:18.635976  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:20.636471  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:23.136420  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:25.635898  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:27.636092  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:29.636223  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:32.135814  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:34.136208  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:36.136320  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:38.635965  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:41.136884  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:43.636083  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:46.136237  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:48.635722  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:51.135780  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:53.136057  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:55.136925  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:57.636578  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:00.135989  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:02.136086  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:04.136153  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:06.635746  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:08.636054  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:10.636582  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:13.136118  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:15.137042  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:17.636192  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:20.136181  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:22.136256  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:24.136756  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:26.636114  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:28.636414  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:31.136248  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:33.136847  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:35.635813  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:37.636126  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:39.636375  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:42.136175  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:44.636682  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:47.135843  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:49.136252  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:51.137073  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:53.636035  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:55.636279  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:58.136943  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:00.635664  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:02.636502  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:04.638145  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:07.136842  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:09.636372  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:12.136048  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:14.136569  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:16.635705  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:18.636532  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:21.136177  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:23.636753  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:26.136524  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:28.635691  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:30.636561  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:33.136478  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:35.636196  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:38.137078  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:40.636164  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:42.636749  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:45.136427  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:47.636180  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:49.636861  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:52.136563  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.136714  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.138823  276679 node_ready.go:38] duration metric: took 4m0.0096115s waiting for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:28:54.141397  276679 out.go:177] 
	W0601 11:28:54.143025  276679 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:28:54.143041  276679 out.go:239] * 
	W0601 11:28:54.143750  276679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:28:54.145729  276679 out.go:177] 
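	Both runs hit the same 4-minute node-readiness timeout inside minikube's 6-minute GUEST_START budget. The wait can be roughly reproduced by hand with kubectl's condition waiter (node name taken from the log above):

	    kubectl wait node/default-k8s-different-port-20220601110654-6708 --for=condition=Ready --timeout=4m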
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c1c98bb1bf714       6de166512aa22       50 seconds ago      Running             kindnet-cni               4                   5034272feeb28
	b9c21a59dc97a       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   5034272feeb28
	2024cc29941ea       4c03754524064       13 minutes ago      Running             kube-proxy                0                   daf24f5fe6815
	66ae64154eec2       595f327f224a4       13 minutes ago      Running             kube-scheduler            2                   2407fda9d1316
	6a41e96934391       25f8c7f3da61c       13 minutes ago      Running             etcd                      2                   99351c41f0535
	886985a42629e       8fa62c12256df       13 minutes ago      Running             kube-apiserver            2                   0116dd4e67c47
	419ab1e52af79       df7b72818ad2e       13 minutes ago      Running             kube-controller-manager   2                   2380a5b9d67cf
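	The table above shows kindnet-cni on its fourth attempt while the control-plane containers have been up for 13 minutes. It roughly matches what crictl reports on the node; a sketch, assuming SSH access to this profile:

	    minikube -p embed-certs-20220601110327-6708 ssh -- sudo crictl ps -a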
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:16:28 UTC, end at Wed 2022-06-01 11:34:31 UTC. --
	Jun 01 11:26:50 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:26:50.366333181Z" level=info msg="RemoveContainer for \"847a11a10e8fea029dd23fac48e064c61a972e94ec5d262ecd609d1320b886cc\" returns successfully"
	Jun 01 11:27:04 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:27:04.765509974Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jun 01 11:27:04 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:27:04.778398520Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"f851ead84bcd0789b6d53e3ea49d991602dc65ed5246bc44332dd1ed2cd34458\""
	Jun 01 11:27:04 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:27:04.778898291Z" level=info msg="StartContainer for \"f851ead84bcd0789b6d53e3ea49d991602dc65ed5246bc44332dd1ed2cd34458\""
	Jun 01 11:27:04 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:27:04.874198026Z" level=info msg="StartContainer for \"f851ead84bcd0789b6d53e3ea49d991602dc65ed5246bc44332dd1ed2cd34458\" returns successfully"
	Jun 01 11:29:45 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:29:45.094073207Z" level=info msg="shim disconnected" id=f851ead84bcd0789b6d53e3ea49d991602dc65ed5246bc44332dd1ed2cd34458
	Jun 01 11:29:45 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:29:45.094137471Z" level=warning msg="cleaning up after shim disconnected" id=f851ead84bcd0789b6d53e3ea49d991602dc65ed5246bc44332dd1ed2cd34458 namespace=k8s.io
	Jun 01 11:29:45 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:29:45.094148243Z" level=info msg="cleaning up dead shim"
	Jun 01 11:29:45 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:29:45.102958302Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:29:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4203 runtime=io.containerd.runc.v2\n"
	Jun 01 11:29:45 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:29:45.680502023Z" level=info msg="RemoveContainer for \"b948d023f8980c480b53fdabb3e706e5a9dd36d3ec3306923b50ef0f4f9e9a40\""
	Jun 01 11:29:45 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:29:45.684951584Z" level=info msg="RemoveContainer for \"b948d023f8980c480b53fdabb3e706e5a9dd36d3ec3306923b50ef0f4f9e9a40\" returns successfully"
	Jun 01 11:30:09 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:30:09.764701443Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jun 01 11:30:09 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:30:09.776194885Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"b9c21a59dc97a27125389b613840a7ce232fbc8f3c2d532115d660b405fc2761\""
	Jun 01 11:30:09 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:30:09.776670365Z" level=info msg="StartContainer for \"b9c21a59dc97a27125389b613840a7ce232fbc8f3c2d532115d660b405fc2761\""
	Jun 01 11:30:09 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:30:09.867855469Z" level=info msg="StartContainer for \"b9c21a59dc97a27125389b613840a7ce232fbc8f3c2d532115d660b405fc2761\" returns successfully"
	Jun 01 11:32:50 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:32:50.088915129Z" level=info msg="shim disconnected" id=b9c21a59dc97a27125389b613840a7ce232fbc8f3c2d532115d660b405fc2761
	Jun 01 11:32:50 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:32:50.088971315Z" level=warning msg="cleaning up after shim disconnected" id=b9c21a59dc97a27125389b613840a7ce232fbc8f3c2d532115d660b405fc2761 namespace=k8s.io
	Jun 01 11:32:50 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:32:50.088986471Z" level=info msg="cleaning up dead shim"
	Jun 01 11:32:50 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:32:50.097759998Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:32:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4304 runtime=io.containerd.runc.v2\n"
	Jun 01 11:32:50 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:32:50.996245799Z" level=info msg="RemoveContainer for \"f851ead84bcd0789b6d53e3ea49d991602dc65ed5246bc44332dd1ed2cd34458\""
	Jun 01 11:32:51 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:32:51.001467349Z" level=info msg="RemoveContainer for \"f851ead84bcd0789b6d53e3ea49d991602dc65ed5246bc44332dd1ed2cd34458\" returns successfully"
	Jun 01 11:33:40 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:33:40.764942366Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Jun 01 11:33:40 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:33:40.776575966Z" level=info msg="CreateContainer within sandbox \"5034272feeb28ea173b9daa7ead31b2fb82af31b8ab6deaeb6c410cb9ac82b6f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"c1c98bb1bf7140d2a2012ee102402922088dd2d82c91e06134486992e7cf151c\""
	Jun 01 11:33:40 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:33:40.777137351Z" level=info msg="StartContainer for \"c1c98bb1bf7140d2a2012ee102402922088dd2d82c91e06134486992e7cf151c\""
	Jun 01 11:33:40 embed-certs-20220601110327-6708 containerd[390]: time="2022-06-01T11:33:40.867517719Z" level=info msg="StartContainer for \"c1c98bb1bf7140d2a2012ee102402922088dd2d82c91e06134486992e7cf151c\" returns successfully"
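	Each CreateContainer/StartContainer pair above is another restart of kindnet-cni in the same sandbox (Attempt 2, then 3, then 4), with the previous attempt's container removed in between. The crash output of the current attempt can be pulled by the (prefix) container ID from the status table; a sketch:

	    minikube -p embed-certs-20220601110327-6708 ssh -- sudo crictl logs c1c98bb1bf714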
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220601110327-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220601110327-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=embed-certs-20220601110327-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_21_15_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:21:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220601110327-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:34:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:31:42 +0000   Wed, 01 Jun 2022 11:21:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:31:42 +0000   Wed, 01 Jun 2022 11:21:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:31:42 +0000   Wed, 01 Jun 2022 11:21:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:31:42 +0000   Wed, 01 Jun 2022 11:21:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220601110327-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                d600b159-ea34-4ea3-ab62-e86c595f06ef
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220601110327-6708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-xnhg5                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-embed-certs-20220601110327-6708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-20220601110327-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tssbf                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-20220601110327-6708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 13m   kube-proxy  
	  Normal  Starting                 13m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet     Node embed-certs-20220601110327-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet     Updated Node Allocatable limit across pods
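	Ready stays False only because the CNI plugin never initializes; the node does have a PodCIDR assigned. Two quick cross-checks (the app=kindnet label is the usual kindnet DaemonSet convention, so treat it as an assumption):

	    kubectl get node embed-certs-20220601110327-6708 -o jsonpath='{.spec.podCIDR}'
	    kubectl -n kube-system get pods -l app=kindnet -o wide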
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
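	The repeated "martian source" entries are the kernel flagging packets whose source address is not routable back out the interface they arrived on, common noise when Docker bridges and pod CIDRs share one host. Whether they are logged at all is a sysctl toggle:

	    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians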
	
	* 
	* ==> etcd [6a41e969343918f9600ad1703d19a3220b0d2c2fb0c45c8588a9d65792ba9163] <==
	* {"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:21:08.678Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:21:09.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-06-01T11:21:09.167Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220601110327-6708 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:21:09.168Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:21:09.169Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:21:09.169Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-06-01T11:31:09.493Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":644}
	{"level":"info","ts":"2022-06-01T11:31:09.494Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":644,"took":"585.452µs"}
	
	* 
	* ==> kernel <==
	*  11:34:31 up  1:17,  0 users,  load average: 0.09, 0.40, 1.12
	Linux embed-certs-20220601110327-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [886985a42629e2f2581e6d58eba9be2a3e9a0976e634d67f6342d3695a07e331] <==
	* I0601 11:24:30.176959       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:26:12.676418       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:26:12.676503       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:26:12.676518       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:27:12.676989       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:27:12.677077       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:27:12.677085       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:29:12.677245       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:29:12.677320       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:29:12.677331       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:31:12.682124       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:31:12.682207       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:31:12.682222       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:32:12.682914       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:32:12.682996       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:32:12.683011       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:34:12.683903       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:34:12.683995       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:34:12.684015       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [419ab1e52af79f5d31cce5a9b20223a30a371546b4870858a9ea585daadb8873] <==
	* W0601 11:28:27.474216       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:28:57.053359       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:28:57.491272       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:29:27.063771       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:29:27.505806       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:29:57.073622       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:29:57.520107       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:30:27.084062       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:30:27.535564       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:30:57.093067       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:30:57.552812       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:31:27.102344       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:31:27.567738       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:31:57.113425       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:31:57.583033       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:32:27.129285       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:32:27.597352       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:32:57.147824       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:32:57.609743       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:33:27.160073       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:33:27.626739       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:33:57.183274       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:33:57.640693       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:34:27.195317       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:34:27.654581       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [2024cc29941eac56c99e05d765da4cccd7a64faa03b756a89fe50b23fa6e8a56] <==
	* I0601 11:21:27.908559       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0601 11:21:27.908665       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0601 11:21:27.908723       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:21:27.929530       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:21:27.929554       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:21:27.929561       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:21:27.929580       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:21:27.929998       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:21:27.930587       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:21:27.930621       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:21:27.930645       1 config.go:317] "Starting service config controller"
	I0601 11:21:27.930649       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:21:28.031260       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:21:28.031267       1 shared_informer.go:247] Caches are synced for endpoint slice config 
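	kube-proxy comes up cleanly: with no explicit proxy mode it defaults to the iptables proxier, and both informer caches sync within a second. Its programmed rules can be counted on the node as a sanity check:

	    minikube -p embed-certs-20220601110327-6708 ssh -- sudo iptables-save | grep -c KUBE-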
	
	* 
	* ==> kube-scheduler [66ae64154eec2d7b3c29d2dfeddf5ba2852497cdce5a0c800571ccb6a8d41a89] <==
	* W0601 11:21:11.758158       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:21:11.758176       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:21:11.758158       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:11.758197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:11.758221       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:21:11.758233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:21:11.758422       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:11.758446       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:11.758510       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:21:11.758527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:21:11.758633       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:11.758649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:12.596556       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:21:12.596630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:21:12.605956       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:12.606023       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:12.628052       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:12.628092       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:12.632884       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:21:12.632924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:21:12.665044       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:21:12.665106       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:21:12.777999       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:21:12.778030       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0601 11:21:13.283228       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
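	The forbidden errors above are a startup race: the scheduler's informers begin listing before the bootstrap RBAC bindings are visible, and they stop once the client-ca informer syncs (last line). The binding can be spot-checked after the fact:

	    kubectl auth can-i list nodes --as=system:kube-scheduler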
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:16:28 UTC, end at Wed 2022-06-01 11:34:32 UTC. --
	Jun 01 11:33:03 embed-certs-20220601110327-6708 kubelet[2883]: I0601 11:33:03.762492    2883 scope.go:110] "RemoveContainer" containerID="b9c21a59dc97a27125389b613840a7ce232fbc8f3c2d532115d660b405fc2761"
	Jun 01 11:33:03 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:03.762919    2883 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-xnhg5_kube-system(77485095-9b6b-4682-b7c7-f5a313137d9f)\"" pod="kube-system/kindnet-xnhg5" podUID=77485095-9b6b-4682-b7c7-f5a313137d9f
	Jun 01 11:33:05 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:05.035380    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:10 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:10.036273    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:14 embed-certs-20220601110327-6708 kubelet[2883]: I0601 11:33:14.762622    2883 scope.go:110] "RemoveContainer" containerID="b9c21a59dc97a27125389b613840a7ce232fbc8f3c2d532115d660b405fc2761"
	Jun 01 11:33:14 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:14.762931    2883 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-xnhg5_kube-system(77485095-9b6b-4682-b7c7-f5a313137d9f)\"" pod="kube-system/kindnet-xnhg5" podUID=77485095-9b6b-4682-b7c7-f5a313137d9f
	Jun 01 11:33:15 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:15.037053    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:20 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:20.037953    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:25 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:25.039304    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:26 embed-certs-20220601110327-6708 kubelet[2883]: I0601 11:33:26.762434    2883 scope.go:110] "RemoveContainer" containerID="b9c21a59dc97a27125389b613840a7ce232fbc8f3c2d532115d660b405fc2761"
	Jun 01 11:33:26 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:26.762855    2883 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-xnhg5_kube-system(77485095-9b6b-4682-b7c7-f5a313137d9f)\"" pod="kube-system/kindnet-xnhg5" podUID=77485095-9b6b-4682-b7c7-f5a313137d9f
	Jun 01 11:33:30 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:30.040980    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:35 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:35.042237    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:40 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:40.043813    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:40 embed-certs-20220601110327-6708 kubelet[2883]: I0601 11:33:40.762478    2883 scope.go:110] "RemoveContainer" containerID="b9c21a59dc97a27125389b613840a7ce232fbc8f3c2d532115d660b405fc2761"
	Jun 01 11:33:45 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:45.045155    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:50 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:50.045797    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:33:55 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:33:55.046979    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:34:00 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:34:00.047970    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:34:05 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:34:05.049617    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:34:10 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:34:10.050799    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:34:15 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:34:15.051544    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:34:20 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:34:20.052632    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:34:25 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:34:25.053523    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:34:30 embed-certs-20220601110327-6708 kubelet[2883]: E0601 11:34:30.054922    2883 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
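
The kubelet log above is the actual failure for this group: the kindnet-cni container is in CrashLoopBackOff and the CNI plugin never initializes, so the node never reports NetworkReady and workload pods cannot start. (The kube-scheduler "forbidden" warnings further up are the usual startup race before RBAC bootstrapping completes, not the failure.) A minimal diagnostic sketch against this profile; the pod name comes from the log above, but treat the CNI config path and label conventions as assumptions:

	# Check whether any CNI config was ever written inside the node container
	minikube ssh -p embed-certs-20220601110327-6708 "sudo ls -l /etc/cni/net.d"
	# Why is kindnet crashing? Pull the last crashed container's log and the pod events
	kubectl --context embed-certs-20220601110327-6708 -n kube-system logs kindnet-xnhg5 --previous --tail=50
	kubectl --context embed-certs-20220601110327-6708 -n kube-system describe pod kindnet-xnhg5
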
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-jsmdw metrics-server-b955d9d8-rw5ds storage-provisioner dashboard-metrics-scraper-56974995fc-xlrsl kubernetes-dashboard-8469778f77-q4zvb
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 describe pod coredns-64897985d-jsmdw metrics-server-b955d9d8-rw5ds storage-provisioner dashboard-metrics-scraper-56974995fc-xlrsl kubernetes-dashboard-8469778f77-q4zvb
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220601110327-6708 describe pod coredns-64897985d-jsmdw metrics-server-b955d9d8-rw5ds storage-provisioner dashboard-metrics-scraper-56974995fc-xlrsl kubernetes-dashboard-8469778f77-q4zvb: exit status 1 (56.864571ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-jsmdw" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-rw5ds" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-xlrsl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-q4zvb" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220601110327-6708 describe pod coredns-64897985d-jsmdw metrics-server-b955d9d8-rw5ds storage-provisioner dashboard-metrics-scraper-56974995fc-xlrsl kubernetes-dashboard-8469778f77-q4zvb: exit status 1
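
The NotFound errors above are a post-mortem race, not a second failure: between the pod listing at helpers_test.go:261 and the describe at helpers_test.go:275, the non-running pods were deleted (the profile was being torn down, and ReplicaSet pods are replaced under new hashed names). A sketch that snapshots the same information in a single round-trip, leaving no window for deletion:

	# One call captures name, phase, and node for every non-running pod
	kubectl --context embed-certs-20220601110327-6708 get po -A --field-selector=status.phase!=Running -o wide
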
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.41s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.34s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-k8wsb" [4b138c9c-34b9-4f97-a3bb-276249e784f5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
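
The Pending reason above says the scheduler rejected the only node because it still carries the node.kubernetes.io/not-ready taint; the kubelet clears that taint only after the runtime reports NetworkReady, which ties this failure back to the uninitialized CNI. A quick check, with the context name taken from this run:

	# Confirm the node is NotReady and inspect its taints
	kubectl --context default-k8s-different-port-20220601110654-6708 get nodes
	kubectl --context default-k8s-different-port-20220601110654-6708 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
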

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:37:12.928514    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:37:21.870382    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:37:25.080528    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 11:37:54.651974    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
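
Two kinds of noise run through this wait: the helpers_test.go:327 warning fires once per poll until the 9m deadline expires, and the E0601 cert_rotation errors come from the test binary's client-go, which still watches client certificates of earlier, already-deleted profiles (addons-, functional-, cilium-, enable-default-cni-) referenced by the shared kubeconfig; once a profile is deleted its client.crt vanishes and the watcher logs "no such file or directory". Neither is the failure. After the suite finishes, the stale entries can be pruned; a sketch, assuming the kubeconfig path from this run is in effect via KUBECONFIG:

	# List contexts left over from deleted profiles, then drop one of them
	kubectl config get-contexts
	kubectl config delete-context addons-20220601102024-6708
	kubectl config unset users.addons-20220601102024-6708
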
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:276: ***** TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
start_stop_delete_test.go:276: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-06-01 11:37:56.506753992 +0000 UTC m=+4677.506374235
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 describe po kubernetes-dashboard-8469778f77-k8wsb -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601110654-6708 describe po kubernetes-dashboard-8469778f77-k8wsb -n kubernetes-dashboard: context deadline exceeded (1.675µs)
start_stop_delete_test.go:276: kubectl --context default-k8s-different-port-20220601110654-6708 describe po kubernetes-dashboard-8469778f77-k8wsb -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 logs kubernetes-dashboard-8469778f77-k8wsb -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601110654-6708 logs kubernetes-dashboard-8469778f77-k8wsb -n kubernetes-dashboard: context deadline exceeded (209ns)
start_stop_delete_test.go:276: kubectl --context default-k8s-different-port-20220601110654-6708 logs kubernetes-dashboard-8469778f77-k8wsb -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
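
The condition the test timed out on can be reproduced by hand with kubectl wait, which blocks until every pod matching the label reports Ready or the timeout expires; a sketch using the same label, namespace, and 9m budget as the test:

	# Hand-rolled equivalent of the readiness wait at start_stop_delete_test.go:276
	kubectl --context default-k8s-different-port-20220601110654-6708 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
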
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601110654-6708
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220601110654-6708:

-- stdout --
	[
	    {
	        "Id": "dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b",
	        "Created": "2022-06-01T11:07:03.290503902Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276959,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:19:53.572720887Z",
	            "FinishedAt": "2022-06-01T11:19:52.302658787Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/hosts",
	        "LogPath": "/var/lib/docker/containers/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b/dccf9935a74c10fb8fa207fbc849bba86fe9f8dff98b2051cc49fbfa90e4ec8b-json.log",
	        "Name": "/default-k8s-different-port-20220601110654-6708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220601110654-6708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220601110654-6708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e-init/diff:/var/lib/docker/overlay2/b8d8db1a2aa7e68bc30cc8784920412f67de75d287f26f41df14fe77e9cd01aa/diff:/var/lib/docker/overlay2/e05914bc392f34941f075ddf1f08984ba653e41980a3a12b3f0ec7bc66faed2c/diff:/var/lib/docker/overlay2/311290be1e68cfaf248422398f2ee037b662e83e8a80c393cf1f35af46c871e3/diff:/var/lib/docker/overlay2/df5bd86ed0e0c9740e1a389f0dd435837b730043ff123ee198aa28f82511f21c/diff:/var/lib/docker/overlay2/52ebf0baed5a1f4378d84fd6b94c9c1756fef7c7837d0766f77ef29b2104f39c/diff:/var/lib/docker/overlay2/19fdc64f1948c51309fb85761a84e7787a06f91f43e3d554c0507c5291bda802/diff:/var/lib/docker/overlay2/8fbe94a5f5ec7e7b87e3294d917aaba04c0d5a1072a4876ca3171f7b416e78d1/diff:/var/lib/docker/overlay2/3f2aec9d3b6e64c9251bd84e36639908a944a57741768984d5a66f5f34ed2288/diff:/var/lib/docker/overlay2/f4b59ae0a52d88a0325a281689c43f769408e14f70b9c94b771e70af140d7538/diff:/var/lib/docker/overlay2/1b96100ef86fc38fba7ce796c89f6a0f0c16c7fc1a94ff1a7f2021a01dd5471e/diff:/var/lib/docker/overlay2/10388fac28961ac0e67628b98602c8c77b04d12b21cd25a8d9adc05c1261252b/diff:/var/lib/docker/overlay2/efcc44ba0e0e6acd23e74db49106211785c2519f22b858252b43b17e927095e4/diff:/var/lib/docker/overlay2/3dbc9b9ec4e689c631fadb88ec67ad1f44f1059642f0d9e9740fa4523681476a/diff:/var/lib/docker/overlay2/196519dbdfaceb51fe77bd0517b4a726c63133e2f1a44cc71539d859f920996f/diff:/var/lib/docker/overlay2/bf269014ddb2ded291bc1f7a299a168567e8ffb016d7d4ba5ad3681eb1c988ba/diff:/var/lib/docker/overlay2/dc3a1c86dd14e2cba77aa4a9424d61aa34efc817c2c14b8f79d66fb1c19132ea/diff:/var/lib/docker/overlay2/7151cfa81a30a8e2bacf0e7e97eb21e825844e9ee99d4d774c443cf4df3042bf/diff:/var/lib/docker/overlay2/4827c7b9079e215b353e51dd539d7e28bf73341ea0a494df4654e9fd1b53d16c/diff:/var/lib/docker/overlay2/2da0eda04306aaeacd19a5cc85b885230b88e74f1791bdb022ebf4b39d85fae2/diff:/var/lib/docker/overlay2/1a0bdf8971fb0a44ff0e168623dbbb2d84b8e93f6d20d9351efd68470e4d4851/diff:/var/lib/docker/overlay2/9ced6f582fc4ce00064f1f5e6ac922d838648fe94afc8e015c309e04004f10ca/diff:/var/lib/docker/overlay2/dd6d4c3166eb565aff2516953d5a8877a204214f2436d414475132ae70429cf7/diff:/var/lib/docker/overlay2/a1ace060e85891d54b26ff4be9fcbce36ffbb15cc4061eb4ccf0add8f82783df/diff:/var/lib/docker/overlay2/bc8b93bfba93e7da2c573ae8b6405ebff526153f6a8b0659aebaf044dc7e8f43/diff:/var/lib/docker/overlay2/c6292624b658b5761ddc277e4b50f1bd9d32fb9a2ad4a01647d6482fa0d29eb3/diff:/var/lib/docker/overlay2/cfe8f35eeb0a80a3c747eac0dfd9195da1fa3d9f92f0b7866d46f3517f3b10ee/diff:/var/lib/docker/overlay2/90bc3b9378b5761de7575f1a82d48e4b4ebf50af153eafa5a565585e136b87f8/diff:/var/lib/docker/overlay2/e1b928a2483870df7fdf4adb3b4002f9effe1db7fbff925b24005f47290b2915/diff:/var/lib/docker/overlay2/4758c75ab63fd3f43ae7b654bc93566ded69acea0e92caf3043ef6eeeec9ca1b/diff:/var/lib/docker/overlay2/8558da5809877030d87982052e5011fb04b8bb9646d0f3c1d4aa10f2d7926592/diff:/var/lib/docker/overlay2/f6400cfae811f55736f55064c8949e18ac2dc1175a5149bb0382a79fa924f3f3/diff:/var/lib/docker/overlay2/875d5278ff8445b84261d4586c6a14fbbd9c13beff1fe9252591162e4d91a0dc/diff:/var/lib/docker/overlay2/12d9f85a229e1386e37fb609020fdcb5535c67ce811012fd646182559e4ee754/diff:/var/lib/docker/overlay2/d5c5b85272d7a8b0b62da488c8cba8d883a0adcfc9f1b2f6ad2f856b4f13e5f7/diff:/var/lib/docker/overlay2/5f6feb9e059c22491e287804f38f097fda984dc9376ba28ae810e13dcf27394f/diff:/var/lib/docker/overlay2/113a715b1135f09b959295e960aeaa36846ad54a6fe46fdd53d061bc3fe114e3/diff:/var/lib/docker/overlay2/265096a57a98130b8073aa41d0a22f0ada5a391943e490ac1c281a634e22cba0/diff:/var/lib/docker/overlay2/15f57e6dc9a6b382240a9335aae067454048b2eb6d0094f2d3c8c115be34311a/diff:/var/lib/docker/overlay2/45ca8d44d46a41b4493f52c0048d08b8d5ff4c1b28233ab8940f6422df1f1273/diff:/var/lib/docker/overlay2/d555005a13c251ef928389a1b06d251a17378cf3ec68af5a4d6c849412d3f69f/diff:/var/lib/docker/overlay2/80b8c06850dacfe2ca4db4e0fa4d2d0dd6997cf495a7f7da9b95a996694144c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec30b29d13ab7f3810bf40e3fa416d096637b34a6f7b5a750bd7d391c0a4008e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220601110654-6708",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220601110654-6708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220601110654-6708",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220601110654-6708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "627aaaeeaa419894172d2929261a1bd95129c59503b90707762ab0b61d080e8a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49439"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/627aaaeeaa41",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220601110654-6708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dccf9935a74c",
	                        "default-k8s-different-port-20220601110654-6708"
	                    ],
	                    "NetworkID": "7d52ef0dc0855b59c05da2e66b25f4d0866ad1d653be1fa615e193dd86443771",
	                    "EndpointID": "6107b065ae8c8c99ec32f0643fe4776fd7bfb23a42439002519244e27fe4c287",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
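
Most of the inspect dump above is boilerplate; the fields that matter here are that the container is Running with RestartCount 0, holds 192.168.49.2 on its own Docker network, and publishes 8444 (the "different port" API server) to 127.0.0.1:49439. Those can be pulled directly with a Go-template format string instead of scanning the full JSON, e.g.:

	# Extract only the state, restart count, and container IP
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
	  default-k8s-different-port-20220601110654-6708
	# Show the published port mappings, one line each
	docker port default-k8s-different-port-20220601110654-6708
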
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220601110654-6708 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601111420-6708 --memory=2200            | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:15 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601111420-6708                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | newest-cni-20220601111420-6708                             |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:16 UTC | 01 Jun 22 11:16 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 UTC | 01 Jun 22 11:19 UTC |
	|         | default-k8s-different-port-20220601110654-6708             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:21 UTC | 01 Jun 22 11:21 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:25 UTC | 01 Jun 22 11:25 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601110654-6708             | default-k8s-different-port-20220601110654-6708 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:28 UTC | 01 Jun 22 11:28 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601105850-6708                        | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 UTC | 01 Jun 22 11:30 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220601105850-6708            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 UTC | 01 Jun 22 11:30 UTC |
	|         | old-k8s-version-20220601105850-6708                        |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601110327-6708                            | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:34 UTC | 01 Jun 22 11:34 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220601110327-6708                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:34 UTC | 01 Jun 22 11:34 UTC |
	|         | embed-certs-20220601110327-6708                            |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:19:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:19:52.827023  276679 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:19:52.827225  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827237  276679 out.go:309] Setting ErrFile to fd 2...
	I0601 11:19:52.827242  276679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:52.827359  276679 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:19:52.827588  276679 out.go:303] Setting JSON to false
	I0601 11:19:52.828890  276679 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3747,"bootTime":1654078646,"procs":456,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:19:52.828955  276679 start.go:125] virtualization: kvm guest
	I0601 11:19:52.831944  276679 out.go:177] * [default-k8s-different-port-20220601110654-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:19:52.833439  276679 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:52.833372  276679 notify.go:193] Checking for updates...
	I0601 11:19:52.835007  276679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:52.836578  276679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:19:52.837966  276679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:19:52.839440  276679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:19:52.841215  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:52.841578  276679 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:52.880823  276679 docker.go:137] docker version: linux-20.10.16
	I0601 11:19:52.880897  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:52.978177  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:52.908721136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:52.978275  276679 docker.go:254] overlay module found
	I0601 11:19:52.981078  276679 out.go:177] * Using the docker driver based on existing profile
	I0601 11:19:52.982316  276679 start.go:284] selected driver: docker
	I0601 11:19:52.982326  276679 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:52.982412  276679 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:52.983242  276679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:53.085320  276679 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 11:19:53.012439643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:53.085561  276679 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:19:53.085581  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:19:53.085589  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:19:53.085608  276679 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:53.088575  276679 out.go:177] * Starting control plane node default-k8s-different-port-20220601110654-6708 in cluster default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.089964  276679 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 11:19:53.091501  276679 out.go:177] * Pulling base image ...
	I0601 11:19:53.092800  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:19:53.092839  276679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:19:53.092856  276679 cache.go:57] Caching tarball of preloaded images
	I0601 11:19:53.092897  276679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:19:53.093061  276679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:19:53.093076  276679 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:19:53.093182  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.136384  276679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:19:53.136410  276679 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:19:53.136424  276679 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:19:53.136454  276679 start.go:352] acquiring machines lock for default-k8s-different-port-20220601110654-6708: {Name:mk7500f636009412c286b3a5b3a2182fb6b229b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:19:53.136550  276679 start.go:356] acquired machines lock for "default-k8s-different-port-20220601110654-6708" in 69.025µs
	I0601 11:19:53.136570  276679 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:19:53.136577  276679 fix.go:55] fixHost starting: 
	I0601 11:19:53.137208  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.168642  276679 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601110654-6708: state=Stopped err=<nil>
	W0601 11:19:53.168681  276679 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:19:53.170972  276679 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601110654-6708" ...
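	[note] The restart-vs-recreate decision above keys off the container's state string (state=Stopped at fix.go:103). A minimal sketch of the same check with the plain docker CLI; the profile name is taken from this run, and the status literals are Docker's own, which minikube maps onto its "Stopped":

    # Inspect the container state the way fix.go does, then restart if not running.
    name=default-k8s-different-port-20220601110654-6708
    state=$(docker container inspect -f '{{.State.Status}}' "$name")
    if [ "$state" != "running" ]; then
      docker start "$name"
    fi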
	I0601 11:19:50.719789  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.220276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:53.243194  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:55.243470  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:53.172500  276679 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.580842  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:19:53.615796  276679 kic.go:416] container "default-k8s-different-port-20220601110654-6708" state is running.
	I0601 11:19:53.616193  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.647308  276679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/config.json ...
	I0601 11:19:53.647503  276679 machine.go:88] provisioning docker machine ...
	I0601 11:19:53.647526  276679 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601110654-6708"
	I0601 11:19:53.647560  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:53.679842  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:53.680106  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:53.680131  276679 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601110654-6708 && echo "default-k8s-different-port-20220601110654-6708" | sudo tee /etc/hostname
	I0601 11:19:53.680742  276679 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55946->127.0.0.1:49442: read: connection reset by peer
	I0601 11:19:56.807880  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601110654-6708
	
	I0601 11:19:56.807951  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.839321  276679 main.go:134] libmachine: Using SSH client type: native
	I0601 11:19:56.839475  276679 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0601 11:19:56.839510  276679 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601110654-6708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601110654-6708/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601110654-6708' | sudo tee -a /etc/hosts; 
				fi
			fi
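	[note] The SSH script above pins the 127.0.1.1 loopback alias to the machine's hostname, rewriting an existing entry in place or appending one. A quick check that it took effect, assuming the container is up:

    docker exec default-k8s-different-port-20220601110654-6708 grep 127.0.1.1 /etc/hosts
    # expected: 127.0.1.1 default-k8s-different-port-20220601110654-6708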
	I0601 11:19:56.951445  276679 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:19:56.951473  276679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:19:56.951491  276679 ubuntu.go:177] setting up certificates
	I0601 11:19:56.951499  276679 provision.go:83] configureAuth start
	I0601 11:19:56.951539  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:56.982392  276679 provision.go:138] copyHostCerts
	I0601 11:19:56.982451  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:19:56.982464  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:19:56.982537  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:19:56.982652  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:19:56.982664  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:19:56.982697  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:19:56.982789  276679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:19:56.982802  276679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:19:56.982829  276679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 11:19:56.982876  276679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601110654-6708 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601110654-6708]
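	[note] provision.go signs that server certificate against minikube's own CA. Purely as an illustration of encoding the same SAN list, a self-signed equivalent with openssl (1.1.1+ for -addext); this is not what provision.go actually runs:

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.default-k8s-different-port-20220601110654-6708" \
      -addext "subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-different-port-20220601110654-6708"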
	I0601 11:19:57.067574  276679 provision.go:172] copyRemoteCerts
	I0601 11:19:57.067626  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:19:57.067654  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.098669  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.182904  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:19:57.199734  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 11:19:57.215838  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:19:57.232284  276679 provision.go:86] duration metric: configureAuth took 280.774927ms
	I0601 11:19:57.232312  276679 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:19:57.232468  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:19:57.232480  276679 machine.go:91] provisioned docker machine in 3.584963826s
	I0601 11:19:57.232486  276679 start.go:306] post-start starting for "default-k8s-different-port-20220601110654-6708" (driver="docker")
	I0601 11:19:57.232492  276679 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:19:57.232530  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:19:57.232572  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.265048  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.351029  276679 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:19:57.353646  276679 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:19:57.353677  276679 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:19:57.353687  276679 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:19:57.353695  276679 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:19:57.353706  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:19:57.353765  276679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:19:57.353858  276679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem -> 67082.pem in /etc/ssl/certs
	I0601 11:19:57.353951  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:19:57.360153  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:19:57.376881  276679 start.go:309] post-start completed in 144.384989ms
	I0601 11:19:57.376932  276679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:19:57.376962  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.411118  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.496188  276679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:19:57.499982  276679 fix.go:57] fixHost completed within 4.363400058s
	I0601 11:19:57.500005  276679 start.go:81] releasing machines lock for "default-k8s-different-port-20220601110654-6708", held for 4.363442227s
	I0601 11:19:57.500082  276679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532057  276679 ssh_runner.go:195] Run: systemctl --version
	I0601 11:19:57.532107  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.532107  276679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:19:57.532168  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:19:57.567039  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.567550  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:19:57.677865  276679 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:19:57.688848  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:19:57.697588  276679 docker.go:187] disabling docker service ...
	I0601 11:19:57.697632  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:19:57.706476  276679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:19:57.714826  276679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:19:57.791919  276679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
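	[note] stop -f, disable, then mask is a belt-and-braces shutdown of dockerd so containerd alone owns the runtime: disable only removes the [Install] symlinks, while mask links the unit to /dev/null so nothing, including socket activation, can start it again:

    sudo systemctl mask docker.service
    systemctl is-enabled docker.service   # prints "masked"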
	I0601 11:19:55.719582  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:58.219607  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:19:57.743387  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:00.243011  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:19:57.865357  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:19:57.874183  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:19:57.886120  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.893706  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.901159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:19:57.908873  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0601 11:19:57.916512  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:19:57.923712  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
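	[note] dmVyc2lvbiA9IDIK is simply base64 for "version = 2", so the imported drop-in only pins the containerd config schema version; the sed edits above patch the sandbox image, OOM-score restriction, cgroup driver, and CNI conf_dir in the node's existing /etc/containerd/config.toml. To see both:

    echo dmVyc2lvbiA9IDIK | base64 -d    # -> version = 2
    grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml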
	I0601 11:19:57.935738  276679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:19:57.941802  276679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:19:57.947777  276679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:19:58.021579  276679 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:19:58.089337  276679 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:19:58.089424  276679 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:19:58.092751  276679 start.go:468] Will wait 60s for crictl version
	I0601 11:19:58.092798  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:19:58.116611  276679 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:19:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0601 11:20:00.719494  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:03.219487  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:02.243060  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:04.243463  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:06.244423  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:05.719159  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:07.719735  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:09.163975  276679 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:20:09.186613  276679 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0601 11:20:09.186676  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.214385  276679 ssh_runner.go:195] Run: containerd --version
	I0601 11:20:09.243587  276679 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:20:09.245245  276679 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601110654-6708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:09.276501  276679 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0601 11:20:09.279800  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
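	[note] That brace-group rewrite is an idempotent upsert of the host.minikube.internal mapping: filter out any stale line, append the fresh one, then cp the temp file over /etc/hosts. cp matters here because Docker bind-mounts /etc/hosts into the container, and an inode-replacing sed -i or mv would break on the mount. Spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # cp rewrites in place, keeping the bind-mounted inode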
	I0601 11:20:09.290992  276679 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0601 11:20:08.742836  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:11.242670  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:09.292426  276679 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:20:09.292493  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.315170  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.315189  276679 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:20:09.315224  276679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:20:09.338119  276679 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:20:09.338137  276679 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:20:09.338184  276679 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:20:09.360773  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:09.360799  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:09.360817  276679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:20:09.360831  276679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601110654-6708 NodeName:default-k8s-different-port-20220601110654-6708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:20:09.360999  276679 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220601110654-6708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
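	[note] The stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what lands in /var/tmp/minikube/kubeadm.yaml.new a few lines below. One hedged way to sanity-check such a rendered config without touching the node:

    # --dry-run prints what kubeadm would create instead of changing the host
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run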
	
	I0601 11:20:09.361105  276679 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220601110654-6708 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
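	[note] The empty ExecStart= followed by a second ExecStart= is the standard systemd drop-in idiom: for non-oneshot services ExecStart may only be set once, so the override first clears the inherited command, then installs kubelet's. The drop-in scp'd below has this shape (flags trimmed here; the full command line is logged above):

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
    EOF
    sudo systemctl daemon-reload   # required before the drop-in takes effect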
	I0601 11:20:09.361162  276679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:20:09.368101  276679 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:20:09.368169  276679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:20:09.374382  276679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0601 11:20:09.386282  276679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:20:09.398188  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:20:09.409736  276679 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:20:09.412458  276679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:20:09.420789  276679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708 for IP: 192.168.49.2
	I0601 11:20:09.420897  276679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:20:09.420940  276679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:20:09.421000  276679 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/client.key
	I0601 11:20:09.421053  276679 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key.dd3b5fb2
	I0601 11:20:09.421088  276679 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key
	I0601 11:20:09.421176  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem (1338 bytes)
	W0601 11:20:09.421205  276679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708_empty.pem, impossibly tiny 0 bytes
	I0601 11:20:09.421216  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:20:09.421244  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:20:09.421270  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:20:09.421298  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 11:20:09.421334  276679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem (1708 bytes)
	I0601 11:20:09.421917  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:20:09.438490  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:20:09.454711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:20:09.471469  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601110654-6708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:20:09.488271  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:20:09.504375  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:20:09.520473  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:20:09.536663  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:20:09.552725  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:20:09.568724  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/6708.pem --> /usr/share/ca-certificates/6708.pem (1338 bytes)
	I0601 11:20:09.584711  276679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/67082.pem --> /usr/share/ca-certificates/67082.pem (1708 bytes)
	I0601 11:20:09.600406  276679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
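
The `scp memory -->` entry above is not a file copy: minikube streams an in-memory buffer over the SSH session and writes it to the destination path on the node. A rough shell equivalent of that idea (NODE_SSH and the contents variable are illustrative placeholders, not minikube's actual transport):

    # Stream in-memory kubeconfig bytes to the node without writing a local temp file.
    # NODE_SSH is a placeholder for the node's ssh target (e.g. docker@127.0.0.1 plus a forwarded port).
    printf '%s' "$KUBECONFIG_CONTENTS" | ssh "$NODE_SSH" 'sudo tee /var/lib/minikube/kubeconfig >/dev/null'
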
	I0601 11:20:09.611814  276679 ssh_runner.go:195] Run: openssl version
	I0601 11:20:09.616280  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:20:09.623058  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625881  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.625913  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:20:09.630367  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:20:09.636712  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6708.pem && ln -fs /usr/share/ca-certificates/6708.pem /etc/ssl/certs/6708.pem"
	I0601 11:20:09.643407  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646316  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.646366  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6708.pem
	I0601 11:20:09.650791  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6708.pem /etc/ssl/certs/51391683.0"
	I0601 11:20:09.657126  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67082.pem && ln -fs /usr/share/ca-certificates/67082.pem /etc/ssl/certs/67082.pem"
	I0601 11:20:09.663990  276679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666934  276679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.666966  276679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67082.pem
	I0601 11:20:09.671359  276679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67082.pem /etc/ssl/certs/3ec20f2e.0"
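
The three `test -s … && ln -fs …` / `openssl x509 -hash` pairs above install each CA into the node's trust store using OpenSSL's hashed-directory convention: the hash of the certificate subject becomes the symlink name (`b5213941.0` for minikubeCA, `51391683.0` and `3ec20f2e.0` for the two test CAs), which is how TLS clients locate certificates in /etc/ssl/certs. The same sequence for one certificate:

    # Register a CA under its OpenSSL subject-hash name.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
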
	I0601 11:20:09.677573  276679 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601110654-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601110654-6708
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:09.677668  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:20:09.677695  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:09.700805  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:09.700825  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:09.700835  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:09.700844  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:09.700853  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:09.700863  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:09.700870  276679 cri.go:87] found id: ""
	I0601 11:20:09.700900  276679 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:20:09.711953  276679 cri.go:114] JSON = null
	W0601 11:20:09.711995  276679 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
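
The `unpause failed` warning is a disagreement between two views of the same containers: the CRI side (`crictl ps`) reports six kube-system containers, while `runc list` under containerd's runc root returns null, so the paused-state check has nothing to reconcile and minikube proceeds to the restart path. Both probes, exactly as the log runs them:

    # CRI view: container IDs labelled with the kube-system namespace (6 IDs here)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Low-level view: runc state in containerd's k8s.io namespace (returned null here)
    sudo runc --root /run/containerd/runc/k8s.io list -f json
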
	I0601 11:20:09.712052  276679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:20:09.718628  276679 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:20:09.718649  276679 kubeadm.go:626] restartCluster start
	I0601 11:20:09.718687  276679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:20:09.724992  276679 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.725567  276679 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601110654-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:20:09.725941  276679 kubeconfig.go:127] "default-k8s-different-port-20220601110654-6708" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:20:09.726552  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:20:09.727803  276679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:20:09.734151  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.734186  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.741699  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:09.942065  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:09.942125  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:09.950479  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.142775  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.142860  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.151184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.342428  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.342511  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.350942  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.542230  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.542324  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.550731  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.741765  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.741840  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.750184  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:10.942518  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:10.942589  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:10.951137  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.142442  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.142519  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.151332  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.342632  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.342693  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.351149  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.542423  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.542483  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.550625  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.741869  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.741945  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.750554  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:11.942776  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:11.942855  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:11.951226  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.142534  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.142617  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.151065  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.342354  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.342429  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.350855  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.542142  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.542207  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.550615  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.741824  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.741894  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.750511  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.750537  276679 api_server.go:165] Checking apiserver status ...
	I0601 11:20:12.750569  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:20:12.758099  276679 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.758124  276679 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 11:20:12.758131  276679 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:20:12.758146  276679 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:20:12.758196  276679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:20:12.782896  276679 cri.go:87] found id: "fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd"
	I0601 11:20:12.782918  276679 cri.go:87] found id: "313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d"
	I0601 11:20:12.782924  276679 cri.go:87] found id: "f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90"
	I0601 11:20:12.782931  276679 cri.go:87] found id: "0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e"
	I0601 11:20:12.782936  276679 cri.go:87] found id: "627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787"
	I0601 11:20:12.782943  276679 cri.go:87] found id: "6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44"
	I0601 11:20:12.782948  276679 cri.go:87] found id: ""
	I0601 11:20:12.782955  276679 cri.go:232] Stopping containers: [fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44]
	I0601 11:20:12.782994  276679 ssh_runner.go:195] Run: which crictl
	I0601 11:20:12.785799  276679 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop fec5fcb4aee4a111d94d448605edbdddd36bb33e1c2c06fad87be11f5157efdd 313035e9674ffbf03bc7f81b4786fb43b0ffff7b5720ab0e88a6bb8f52d6087d f9746f111b56ac0244039e7b095ebf29999ccb20c1bf5e1ba484273bdb9e0d90 0b15aeee4f5511a740a4f3f9ebbba9758b6b511b072da616c68a21c118c7790e 627fd5c08820c1666f69a50ff3cc02a6eee73709048b46a0243143bf89dde787 6ce85ae821e03fdd8bd07541eda8ea822ee62a21dc41c68bec58c5695d43fb44
	I0601 11:20:12.809504  276679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:20:12.819061  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:20:12.825913  276679 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:07 /etc/kubernetes/scheduler.conf
	
	I0601 11:20:12.825968  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 11:20:10.219173  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:12.219371  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:13.243691  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:15.243798  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:12.832916  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 11:20:12.839178  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.845567  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.845605  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:20:12.851603  276679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 11:20:12.857919  276679 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:20:12.857967  276679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 11:20:12.864112  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870523  276679 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:20:12.870540  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:12.912381  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.433508  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.566844  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:13.617762  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
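
Because existing configuration files were found, restartCluster re-runs kubeadm's init phases individually rather than doing a full `kubeadm init`: certificates, kubeconfigs, kubelet start, the static control-plane manifests, then local etcd. The five runs above are equivalent to:

    # Phased re-init against the regenerated kubeadm.yaml, using the versioned binaries as in the log.
    K="/var/lib/minikube/binaries/v1.23.6"
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into subcommand + argument.
      sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
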
	I0601 11:20:13.686212  276679 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:20:13.686269  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.195273  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.695296  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.195457  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:15.695544  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.195542  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:16.695465  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.195333  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:17.694666  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:14.719337  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.218953  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:17.742741  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:20.244002  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:18.194692  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:18.694918  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.195623  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.695137  276679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:20:19.758656  276679 api_server.go:71] duration metric: took 6.072444993s to wait for apiserver process to appear ...
	I0601 11:20:19.758687  276679 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:20:19.758700  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.369047  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:20:22.369078  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:20:19.718920  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:21.719314  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:23.719804  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:22.869917  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:22.874561  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:22.874589  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.370203  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.375048  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:20:23.375073  276679 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:20:23.869242  276679 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0601 11:20:23.874012  276679 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0601 11:20:23.879941  276679 api_server.go:140] control plane version: v1.23.6
	I0601 11:20:23.879963  276679 api_server.go:130] duration metric: took 4.121269797s to wait for apiserver health ...
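
The three healthz probes above trace the apiserver coming up: first a 403 (the unauthenticated probe arrives as `system:anonymous` before the RBAC bootstrap roles exist), then 500s while the `[-]poststarthook/rbac/bootstrap-roles` and scheduling priority-class hooks are still settling, and finally a bare `ok` once every post-start hook has completed. The same endpoint can be probed by hand; the `verbose` query parameter forces the per-check breakdown even on success:

    # -k because the cluster CA is not in the local trust store;
    # expect 403 for anonymous requests until RBAC permits /healthz.
    curl -k 'https://192.168.49.2:8444/healthz?verbose'
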
	I0601 11:20:23.879972  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:20:23.879977  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:20:23.882052  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:20:22.743507  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:25.242700  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:23.883460  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:20:23.886921  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:20:23.886945  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:20:23.899955  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:20:24.544438  276679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:20:24.550979  276679 system_pods.go:59] 9 kube-system pods found
	I0601 11:20:24.551015  276679 system_pods.go:61] "coredns-64897985d-9gcj2" [28e98fca-a88b-422d-9f4b-797b18a8ff7a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551025  276679 system_pods.go:61] "etcd-default-k8s-different-port-20220601110654-6708" [3005e651-1349-4d5e-b06f-e0fac3064ccf] Running
	I0601 11:20:24.551035  276679 system_pods.go:61] "kindnet-7fspq" [eefcd8e6-51e4-4d48-a420-93f4b47cf732] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0601 11:20:24.551042  276679 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601110654-6708" [974fafdd-9176-4d97-acd7-9874d63b4987] Running
	I0601 11:20:24.551053  276679 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601110654-6708" [38b2c1a1-9a1a-4a1f-9fac-904e47d545be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:20:24.551066  276679 system_pods.go:61] "kube-proxy-slzcl" [a1a6237f-6142-4e31-8bd4-72afd4f8a4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:20:24.551083  276679 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601110654-6708" [42ce6176-36e5-46bc-a443-19e4ca958785] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:20:24.551092  276679 system_pods.go:61] "metrics-server-b955d9d8-2k9wk" [fbc457b5-c359-4b84-abe5-d488874181f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551102  276679 system_pods.go:61] "storage-provisioner" [48086474-3417-47ff-970d-f7cf7806983b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:20:24.551112  276679 system_pods.go:74] duration metric: took 6.652373ms to wait for pod list to return data ...
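
Every Pending pod in the list carries the same scheduling message: the lone node still holds the `node.kubernetes.io/not-ready` taint, which the kubelet clears only once the CNI is functional, and coredns, metrics-server and storage-provisioner do not tolerate it (the DaemonSet-managed kindnet and kube-proxy pods do, so they run). One way to confirm the taint from outside the test, assuming a kubeconfig pointed at this profile:

    # Show each node's taints, then the stuck kube-system pods.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
    kubectl -n kube-system get pods -o wide
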
	I0601 11:20:24.551126  276679 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:20:24.553819  276679 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0601 11:20:24.553843  276679 node_conditions.go:123] node cpu capacity is 8
	I0601 11:20:24.553854  276679 node_conditions.go:105] duration metric: took 2.721044ms to run NodePressure ...
	I0601 11:20:24.553869  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:20:24.680194  276679 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683686  276679 kubeadm.go:777] kubelet initialised
	I0601 11:20:24.683708  276679 kubeadm.go:778] duration metric: took 3.487172ms waiting for restarted kubelet to initialise ...
	I0601 11:20:24.683715  276679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:24.689167  276679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	I0601 11:20:26.694484  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:26.219205  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:28.219317  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:27.243486  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:29.742717  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:31.742800  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:28.695017  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.695110  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:32.695566  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:30.219646  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:32.719074  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:34.242643  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:36.243891  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.195305  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:37.197596  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:35.219473  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:37.719336  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:38.243963  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.743349  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:39.695270  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:42.195160  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:40.218932  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.719276  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:42.743398  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.243686  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:44.694661  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:46.695274  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:45.219350  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.719698  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:47.742813  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.244047  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:48.696514  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:51.195247  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:50.218967  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.219422  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:52.743394  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.743515  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:53.694370  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:55.694640  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:57.695171  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:54.719514  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.219033  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:20:57.242819  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.243739  270029 pod_ready.go:102] pod "coredns-64897985d-9dpfv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:04:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.739945  270029 pod_ready.go:81] duration metric: took 4m0.002166585s waiting for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" ...
	E0601 11:20:59.739968  270029 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9dpfv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:20:59.739995  270029 pod_ready.go:38] duration metric: took 4m0.008917217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:20:59.740018  270029 kubeadm.go:630] restartCluster took 4m15.707393707s
	W0601 11:20:59.740131  270029 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 11:20:59.740156  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:21:01.430762  270029 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.690579833s)
	I0601 11:21:01.430838  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:01.440364  270029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:21:01.447145  270029 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:21:01.447194  270029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:21:01.453852  270029 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:21:01.453891  270029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:21:01.701224  270029 out.go:204]   - Generating certificates and keys ...
	I0601 11:21:00.194872  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:02.195437  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:20:59.219067  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:01.219719  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:03.719181  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:02.294583  270029 out.go:204]   - Booting up control plane ...
	I0601 11:21:04.694423  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:06.695087  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:05.719516  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:07.719966  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:09.195174  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:11.694583  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:10.218984  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:12.219075  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:14.337355  270029 out.go:204]   - Configuring RBAC rules ...
	I0601 11:21:14.750718  270029 cni.go:95] Creating CNI manager for ""
	I0601 11:21:14.750741  270029 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:21:14.752905  270029 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:21:14.754285  270029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:21:14.758047  270029 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:21:14.758065  270029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:21:14.771201  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:21:15.434277  270029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:21:15.434380  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.434381  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601110327-6708 minikube.k8s.io/updated_at=2022_06_01T11_21_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489119  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:15.489208  270029 ops.go:34] apiserver oom_adj: -16
	I0601 11:21:16.079192  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:16.579319  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:14.194681  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:16.694557  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:14.219440  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:16.719363  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:17.079349  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:17.579548  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.079683  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.579186  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.079819  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:19.579346  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.079183  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:20.579984  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.079335  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:21.579766  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:18.694796  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:21.194627  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:19.218867  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:21.219185  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:23.719814  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:22.079321  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:22.579993  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.079856  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.579743  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.079256  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:24.579276  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.079828  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:25.579763  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.080068  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:26.579388  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:23.694527  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:25.694996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:27.079269  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.579729  270029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:21:27.636171  270029 kubeadm.go:1045] duration metric: took 12.201851278s to wait for elevateKubeSystemPrivileges.
	I0601 11:21:27.636205  270029 kubeadm.go:397] StartCluster complete in 4m43.646757592s
	I0601 11:21:27.636227  270029 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:27.636334  270029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:21:27.637880  270029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:21:28.157076  270029 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601110327-6708" rescaled to 1
	I0601 11:21:28.157150  270029 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:21:28.157180  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:21:28.159818  270029 out.go:177] * Verifying Kubernetes components...
	I0601 11:21:28.157185  270029 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:21:28.157406  270029 config.go:178] Loaded profile config "embed-certs-20220601110327-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:21:28.161484  270029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:21:28.161496  270029 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161511  270029 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161523  270029 addons.go:165] addon metrics-server should already be in state true
	I0601 11:21:28.161537  270029 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161566  270029 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601110327-6708"
	I0601 11:21:28.161573  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	W0601 11:21:28.161579  270029 addons.go:165] addon dashboard should already be in state true
	I0601 11:21:28.161483  270029 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161622  270029 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.161631  270029 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:21:28.161636  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161669  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.161500  270029 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601110327-6708"
	I0601 11:21:28.161709  270029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601110327-6708"
	I0601 11:21:28.161949  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162094  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162123  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.162229  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.209663  270029 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.211523  270029 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:21:28.213009  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:21:28.213030  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:21:28.213079  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.216922  270029 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:21:28.218989  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:21:28.217201  270029 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601110327-6708"
	W0601 11:21:28.219035  270029 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:21:28.219075  270029 host.go:66] Checking if "embed-certs-20220601110327-6708" exists ...
	I0601 11:21:28.219579  270029 cli_runner.go:164] Run: docker container inspect embed-certs-20220601110327-6708 --format={{.State.Status}}
	I0601 11:21:28.219012  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:21:28.219781  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.236451  270029 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:21:26.218905  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.219209  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:28.238138  270029 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.238163  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:21:28.238217  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.246850  270029 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:21:28.246885  270029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:21:28.273680  270029 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.273707  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:21:28.273761  270029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601110327-6708
	I0601 11:21:28.278846  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.279320  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.286384  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.321729  270029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601110327-6708/id_rsa Username:docker}
	I0601 11:21:28.455756  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:21:28.455785  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:21:28.466348  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:21:28.469026  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:21:28.469067  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:21:28.469486  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:21:28.478076  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:21:28.478099  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:21:28.487008  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:21:28.487036  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:21:28.573106  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:21:28.573135  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:21:28.574698  270029 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0601 11:21:28.577019  270029 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.577042  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:21:28.653936  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:21:28.653967  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:21:28.658482  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:21:28.671762  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:21:28.671808  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:21:28.758424  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:21:28.758516  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:21:28.776703  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:21:28.776735  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:21:28.794636  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:21:28.794670  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:21:28.959418  270029 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:28.959449  270029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:21:28.976465  270029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:21:29.354605  270029 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601110327-6708"
	I0601 11:21:29.699561  270029 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 11:21:29.700807  270029 addons.go:417] enableAddons completed in 1.543631535s
	I0601 11:21:30.260215  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:28.196140  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.694688  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:32.695236  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:30.219534  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.219685  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:32.260412  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:34.760173  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:36.760442  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:35.195034  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:37.195304  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:34.718805  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:36.719108  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:38.760533  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:40.761060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:39.694703  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:42.195994  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:39.219402  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:41.718982  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.719227  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:43.259684  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.260363  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:45.719329  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.719480  254820 node_ready.go:58] node "old-k8s-version-20220601105850-6708" has status "Ready":"False"
	I0601 11:21:47.721505  254820 node_ready.go:38] duration metric: took 4m0.008123732s waiting for node "old-k8s-version-20220601105850-6708" to be "Ready" ...
	I0601 11:21:47.723918  254820 out.go:177] 
	W0601 11:21:47.725406  254820 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:21:47.725423  254820 out.go:239] * 
	W0601 11:21:47.726098  254820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:21:47.728001  254820 out.go:177] 
	I0601 11:21:44.695306  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.194624  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:47.760960  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:50.260784  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:49.195368  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:51.694946  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:52.760281  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:55.259912  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:54.194912  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:56.195652  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:21:57.259956  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:59.759755  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:01.759853  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:21:58.694995  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:01.194431  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:03.760721  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:06.260069  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:03.195297  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:05.694312  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:07.695082  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:08.260739  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:10.760237  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:10.194760  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:12.194885  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:13.259813  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:15.260153  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:14.195226  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:16.694528  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:17.260859  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:19.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:21.760654  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:18.695235  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:21.194694  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:24.260433  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:26.760129  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:23.197530  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:25.695229  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:28.760717  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:31.260368  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:28.194771  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:30.195026  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:32.694696  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:33.760112  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:35.760758  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:34.694930  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:36.695375  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:38.260723  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:40.760393  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:39.194795  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:41.694750  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:43.259823  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:45.260551  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:44.195389  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:46.695489  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:47.760311  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:49.760404  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:49.194395  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:51.195245  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:52.260594  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:54.760044  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:56.760073  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:53.195327  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:55.694893  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:22:58.760157  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:01.260267  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:22:58.194547  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:00.694762  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:03.260561  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:05.260780  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:03.195176  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:05.694698  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.695208  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:07.760513  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.260326  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:10.195039  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.695240  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:12.260674  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:14.260918  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:16.760064  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:15.195155  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:17.195241  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:18.760686  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:21.260676  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:19.694620  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:21.694667  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:23.760024  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:26.259746  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:24.194510  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:26.194546  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:28.260714  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:30.760541  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:28.194917  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:30.694766  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:33.260035  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:35.261060  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:33.195328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:35.694682  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.695340  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:37.760144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.260334  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:40.194751  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.194853  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:42.759808  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.759997  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:46.760285  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:44.695010  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:46.695526  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:48.760374  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:51.260999  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:49.194307  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:51.195053  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:53.760587  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:56.260172  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:53.195339  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:55.695153  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:23:58.759799  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:00.760631  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:23:58.194738  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:00.195407  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:02.695048  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:03.260687  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:05.260722  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:04.695337  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.194665  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:07.760567  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:10.260596  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:09.195069  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:11.694328  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:12.260967  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.759793  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:16.760292  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:14.194996  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:16.694542  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:18.760531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:20.760689  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:18.694668  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:20.695051  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:23.195952  276679 pod_ready.go:102] pod "coredns-64897985d-9gcj2" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-01 11:07:34 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0601 11:24:24.691928  276679 pod_ready.go:81] duration metric: took 4m0.002724634s waiting for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" ...
	E0601 11:24:24.691955  276679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-9gcj2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:24:24.691981  276679 pod_ready.go:38] duration metric: took 4m0.008258762s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:24:24.692005  276679 kubeadm.go:630] restartCluster took 4m14.973349857s
	W0601 11:24:24.692130  276679 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
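[annotation] The repeated node_ready/pod_ready lines above come from a poll-until-ready loop that gives up after a fixed budget (here 4m0s, per the "will not retry!" error at 11:24:24). A minimal Go sketch of that shape, assuming a hypothetical waitFor helper rather than minikube's actual internals; the 2s interval is inferred from the timestamp spacing:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor polls cond every interval until it reports done or the
    // timeout elapses -- the same shape as the node_ready/pod_ready
    // loops logged above.
    func waitFor(interval, timeout time.Duration, cond func() bool) error {
    	deadline := time.Now().Add(timeout)
    	for !cond() {
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for the condition")
    		}
    		time.Sleep(interval)
    	}
    	return nil
    }

    func main() {
    	// Simulate a pod that never becomes Ready, like
    	// coredns-64897985d-9gcj2 above: after the 4m0s budget the
    	// loop gives up instead of retrying.
    	err := waitFor(2*time.Second, 4*time.Minute, func() bool { return false })
    	fmt.Println(err)
    }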
	I0601 11:24:24.692159  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0601 11:24:26.286416  276679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.594228976s)
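[annotation] The ssh_runner "Run:" / "Completed: ... (1.594228976s)" pair above reflects a wrapper that times each remote command and reports the elapsed duration when the command is slow. A rough stand-alone sketch; the 1s reporting threshold and the runWithDurationLog name are assumptions for illustration, not minikube's code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runWithDurationLog logs the command, runs it, and reports the
    // elapsed time when it was slow, mirroring the Run/Completed pairs
    // in the log above.
    func runWithDurationLog(name string, args ...string) error {
    	fmt.Println("Run:", name, args)
    	start := time.Now()
    	err := exec.Command(name, args...).Run()
    	if d := time.Since(start); d > time.Second { // assumed threshold
    		fmt.Printf("Completed: %s (%s)\n", name, d)
    	}
    	return err
    }

    func main() {
    	_ = runWithDurationLog("sleep", "2")
    }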
	I0601 11:24:26.286489  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:26.296314  276679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:24:26.303059  276679 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:24:26.303116  276679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:24:26.309917  276679 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:24:26.309957  276679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:24:22.761011  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:25.261206  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:26.556270  276679 out.go:204]   - Generating certificates and keys ...
	I0601 11:24:27.302083  276679 out.go:204]   - Booting up control plane ...
	I0601 11:24:27.261441  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:29.759885  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:32.260145  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:34.260990  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:36.760710  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:38.840585  276679 out.go:204]   - Configuring RBAC rules ...
	I0601 11:24:39.253770  276679 cni.go:95] Creating CNI manager for ""
	I0601 11:24:39.253791  276679 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 11:24:39.255739  276679 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0601 11:24:39.259837  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:41.260124  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:39.257207  276679 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0601 11:24:39.261207  276679 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0601 11:24:39.261228  276679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0601 11:24:39.273744  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0601 11:24:39.861493  276679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:24:39.861573  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.861574  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708 minikube.k8s.io/updated_at=2022_06_01T11_24_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:39.914842  276679 ops.go:34] apiserver oom_adj: -16
	I0601 11:24:39.914913  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.498901  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:40.998931  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.499031  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:41.998593  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:42.499160  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.260473  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:45.760870  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:42.998966  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.498638  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:43.998319  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.498531  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:44.998678  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.499193  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:45.998418  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.498985  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:46.998941  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:47.498945  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.260450  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:50.260933  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:47.999272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.498439  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:48.999292  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.499272  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:49.998339  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.498332  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:50.999106  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.499296  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:51.998980  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.498623  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:52.998371  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.498515  276679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:24:53.594790  276679 kubeadm.go:1045] duration metric: took 13.733266896s to wait for elevateKubeSystemPrivileges.
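[annotation] The burst of "kubectl get sa default" runs between 11:24:39 and 11:24:53 is a retry loop waiting for kubeadm to create the default service account before the minikube-rbac clusterrolebinding can take effect. A sketch of that loop, with the kubectl path and flags copied from the log; the overall 2m budget is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries "kubectl get sa default" until the
    // default service account exists, as in the loop visible above.
    func waitForDefaultSA() error {
    	deadline := time.Now().Add(2 * time.Minute) // assumed budget
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo",
    			"/var/lib/minikube/binaries/v1.23.6/kubectl",
    			"get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if cmd.Run() == nil {
    			return nil // SA exists; RBAC bindings can proceed
    		}
    		time.Sleep(500 * time.Millisecond) // spacing matches the log
    	}
    	return fmt.Errorf("default service account never appeared")
    }

    func main() { fmt.Println(waitForDefaultSA()) }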
	I0601 11:24:53.594820  276679 kubeadm.go:397] StartCluster complete in 4m43.917251881s
	I0601 11:24:53.594841  276679 settings.go:142] acquiring lock: {Name:mk20a847233fa50399ab0a24280bffb8d8dbd41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:53.594938  276679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:24:53.596907  276679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mkb0e16236e54fbc8651999f1dd70854c53de7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:24:54.111475  276679 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601110654-6708" rescaled to 1
	I0601 11:24:54.111547  276679 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:24:54.113711  276679 out.go:177] * Verifying Kubernetes components...
	I0601 11:24:54.111604  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:24:54.111644  276679 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0601 11:24:54.111802  276679 config.go:178] Loaded profile config "default-k8s-different-port-20220601110654-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:24:54.115020  276679 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115035  276679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:24:54.115035  276679 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115048  276679 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115055  276679 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115057  276679 addons.go:165] addon storage-provisioner should already be in state true
	W0601 11:24:54.115064  276679 addons.go:165] addon metrics-server should already be in state true
	I0601 11:24:54.115034  276679 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115103  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115109  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115112  276679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115037  276679 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:54.115134  276679 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.115144  276679 addons.go:165] addon dashboard should already be in state true
	I0601 11:24:54.115176  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.115416  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115596  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115611  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.115615  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.129176  276679 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:24:54.168194  276679 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:24:54.169714  276679 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.171144  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:24:54.170891  276679 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601110654-6708"
	W0601 11:24:54.171181  276679 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:24:54.171211  276679 host.go:66] Checking if "default-k8s-different-port-20220601110654-6708" exists ...
	I0601 11:24:54.171167  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:24:54.171329  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.171684  276679 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601110654-6708 --format={{.State.Status}}
	I0601 11:24:54.176157  276679 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 11:24:54.177770  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:24:54.177796  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:24:54.179131  276679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:24:54.177859  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.180787  276679 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.180809  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:24:54.180855  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.233206  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.240234  276679 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.240263  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:24:54.240311  276679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601110654-6708
	I0601 11:24:54.240743  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.242497  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
	I0601 11:24:54.255476  276679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:24:54.289597  276679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601110654-6708/id_rsa Username:docker}
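[annotation] The docker container inspect -f templates above extract the host port Docker mapped to the container's SSH port 22; the sshutil lines show the resulting value, 49442. Run by hand, the same query would look roughly like this (the port value is the one from the log):

    $ docker container inspect \
        -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        default-k8s-different-port-20220601110654-6708
    49442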
	I0601 11:24:54.510589  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:24:54.510747  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:24:54.510770  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:24:54.556919  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:24:54.556950  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:24:54.566012  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:24:54.566042  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:24:54.569528  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:24:54.576575  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:24:54.576625  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:24:54.654525  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:24:54.654551  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:24:54.655296  276679 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.655319  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:24:54.661290  276679 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
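[annotation] The sed pipeline at 11:24:54 splices a hosts block into the CoreDNS Corefile just above its forward directive, which is what makes host.minikube.internal resolvable from pods. The relevant fragment afterwards looks roughly like this; the .:53 server block and elided directives are surrounding context assumed from stock CoreDNS, not taken from the log:

    .:53 {
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }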
	I0601 11:24:54.671592  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:24:54.671621  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:24:54.673696  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:24:54.687107  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:24:54.687133  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:24:54.768961  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:24:54.768989  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:24:54.854363  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:24:54.854399  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:24:54.870735  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:24:54.870762  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:24:54.888031  276679 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:54.888063  276679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:24:54.967082  276679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:24:55.273650  276679 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601110654-6708"
	I0601 11:24:55.661065  276679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 11:24:52.261071  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:54.261578  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:56.760078  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:55.662561  276679 addons.go:417] enableAddons completed in 1.550935677s
	I0601 11:24:56.136034  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:24:58.760245  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:00.760344  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:24:58.136131  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:00.136759  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:02.636409  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:03.260144  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.260531  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:05.136779  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.635969  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:07.760027  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:09.760904  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:10.136336  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.636564  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:12.260100  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.759992  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:16.760260  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:14.636694  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:17.137058  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:19.260136  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:21.260700  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:19.636331  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:22.136010  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:23.760875  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:26.261082  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:24.136501  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:26.636646  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:28.263320  270029 node_ready.go:58] node "embed-certs-20220601110327-6708" has status "Ready":"False"
	I0601 11:25:28.263343  270029 node_ready.go:38] duration metric: took 4m0.016466534s waiting for node "embed-certs-20220601110327-6708" to be "Ready" ...
	I0601 11:25:28.265930  270029 out.go:177] 
	W0601 11:25:28.267524  270029 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:25:28.267549  270029 out.go:239] * 
	W0601 11:25:28.268404  270029 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:25:28.269962  270029 out.go:177] 
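
Both clusters fail the same way: minikube polls the Node object's Ready condition every few seconds and gives up when its wait budget expires (4m0s of waiting here, inside the 6m0s GUEST_START window). A minimal client-go sketch of such a wait loop (an illustration, not minikube's actual node_ready.go; the 2s interval and kubeconfig path are assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition until it is True or the
	// timeout elapses, printing one line per poll like the log above.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Node name and 4m budget taken from the log above.
		if err := waitNodeReady(cs, "embed-certs-20220601110327-6708", 4*time.Minute); err != nil {
			fmt.Println("X Exiting due to GUEST_START: waiting for node to be ready:", err)
		}
	}
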
	I0601 11:25:28.637161  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:31.135894  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:33.136655  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:35.635923  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:37.636131  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:39.636319  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:42.136004  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:44.136847  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:46.636704  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:49.136203  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:51.136808  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:53.636402  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:56.135580  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:25:58.135934  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:00.136698  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:02.136807  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:04.636360  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:07.136003  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:09.136403  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:11.636023  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:13.636284  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:16.136059  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:18.635976  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:20.636471  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:23.136420  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:25.635898  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:27.636092  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:29.636223  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:32.135814  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:34.136208  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:36.136320  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:38.635965  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:41.136884  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:43.636083  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:46.136237  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:48.635722  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:51.135780  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:53.136057  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:55.136925  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:26:57.636578  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:00.135989  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:02.136086  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:04.136153  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:06.635746  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:08.636054  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:10.636582  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:13.136118  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:15.137042  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:17.636192  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:20.136181  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:22.136256  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:24.136756  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:26.636114  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:28.636414  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:31.136248  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:33.136847  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:35.635813  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:37.636126  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:39.636375  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:42.136175  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:44.636682  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:47.135843  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:49.136252  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:51.137073  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:53.636035  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:55.636279  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:27:58.136943  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:00.635664  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:02.636502  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:04.638145  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:07.136842  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:09.636372  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:12.136048  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:14.136569  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:16.635705  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:18.636532  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:21.136177  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:23.636753  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:26.136524  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:28.635691  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:30.636561  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:33.136478  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:35.636196  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:38.137078  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:40.636164  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:42.636749  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:45.136427  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:47.636180  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:49.636861  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:52.136563  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.136714  276679 node_ready.go:58] node "default-k8s-different-port-20220601110654-6708" has status "Ready":"False"
	I0601 11:28:54.138823  276679 node_ready.go:38] duration metric: took 4m0.0096115s waiting for node "default-k8s-different-port-20220601110654-6708" to be "Ready" ...
	I0601 11:28:54.141397  276679 out.go:177] 
	W0601 11:28:54.143025  276679 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0601 11:28:54.143041  276679 out.go:239] * 
	W0601 11:28:54.143750  276679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:28:54.145729  276679 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bf885aba43938       6de166512aa22       58 seconds ago      Exited              kindnet-cni               7                   df3be3bbc5f79
	2fb746cc75b1d       4c03754524064       13 minutes ago      Running             kube-proxy                0                   e819a7c456c7c
	dd66fe479b71f       595f327f224a4       13 minutes ago      Running             kube-scheduler            2                   c74ba4ef859aa
	7d3ead15d6ba2       25f8c7f3da61c       13 minutes ago      Running             etcd                      2                   d5f8156c990b4
	d21e78271b81a       df7b72818ad2e       13 minutes ago      Running             kube-controller-manager   2                   ee67c136c178d
	a01c09dc992a3       8fa62c12256df       13 minutes ago      Running             kube-apiserver            2                   36abb2c184cf8
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-01 11:19:53 UTC, end at Wed 2022-06-01 11:37:57 UTC. --
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.490544193Z" level=warning msg="cleaning up after shim disconnected" id=7c21d514121895b0dbb3e8edace6db1999e4a2588c3d100ca15a3d28276ae8a3 namespace=k8s.io
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.490563854Z" level=info msg="cleaning up dead shim"
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.499331935Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:28:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4351 runtime=io.containerd.runc.v2\n"
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.693814558Z" level=info msg="RemoveContainer for \"52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30\""
	Jun 01 11:28:51 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:28:51.698952344Z" level=info msg="RemoveContainer for \"52ffc7dba8d4e7d84f0ad6c1fb023e6358ed37847f4efb6b4426796aa9cc6f30\" returns successfully"
	Jun 01 11:31:44 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:44.180840472Z" level=info msg="CreateContainer within sandbox \"df3be3bbc5f79542be5bca9d7d7637b0cac5b8ac05520962d10fb8e4166ec4b9\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Jun 01 11:31:44 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:44.192429549Z" level=info msg="CreateContainer within sandbox \"df3be3bbc5f79542be5bca9d7d7637b0cac5b8ac05520962d10fb8e4166ec4b9\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"c6a18538a13a4b879f6715e47322383910efd40776c8b8680e8d2b9b9189ccc0\""
	Jun 01 11:31:44 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:44.192817713Z" level=info msg="StartContainer for \"c6a18538a13a4b879f6715e47322383910efd40776c8b8680e8d2b9b9189ccc0\""
	Jun 01 11:31:44 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:44.269123198Z" level=info msg="StartContainer for \"c6a18538a13a4b879f6715e47322383910efd40776c8b8680e8d2b9b9189ccc0\" returns successfully"
	Jun 01 11:31:54 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:54.492951623Z" level=info msg="shim disconnected" id=c6a18538a13a4b879f6715e47322383910efd40776c8b8680e8d2b9b9189ccc0
	Jun 01 11:31:54 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:54.493011898Z" level=warning msg="cleaning up after shim disconnected" id=c6a18538a13a4b879f6715e47322383910efd40776c8b8680e8d2b9b9189ccc0 namespace=k8s.io
	Jun 01 11:31:54 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:54.493021570Z" level=info msg="cleaning up dead shim"
	Jun 01 11:31:54 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:54.501638937Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:31:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4685 runtime=io.containerd.runc.v2\n"
	Jun 01 11:31:55 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:55.010512220Z" level=info msg="RemoveContainer for \"7c21d514121895b0dbb3e8edace6db1999e4a2588c3d100ca15a3d28276ae8a3\""
	Jun 01 11:31:55 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:31:55.015135530Z" level=info msg="RemoveContainer for \"7c21d514121895b0dbb3e8edace6db1999e4a2588c3d100ca15a3d28276ae8a3\" returns successfully"
	Jun 01 11:36:59 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:36:59.180630284Z" level=info msg="CreateContainer within sandbox \"df3be3bbc5f79542be5bca9d7d7637b0cac5b8ac05520962d10fb8e4166ec4b9\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:7,}"
	Jun 01 11:36:59 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:36:59.193569204Z" level=info msg="CreateContainer within sandbox \"df3be3bbc5f79542be5bca9d7d7637b0cac5b8ac05520962d10fb8e4166ec4b9\" for &ContainerMetadata{Name:kindnet-cni,Attempt:7,} returns container id \"bf885aba43938d2866c4c409ae2e2855b8ad06978936ae07c278c45a34746ce4\""
	Jun 01 11:36:59 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:36:59.194122537Z" level=info msg="StartContainer for \"bf885aba43938d2866c4c409ae2e2855b8ad06978936ae07c278c45a34746ce4\""
	Jun 01 11:36:59 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:36:59.272488832Z" level=info msg="StartContainer for \"bf885aba43938d2866c4c409ae2e2855b8ad06978936ae07c278c45a34746ce4\" returns successfully"
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:37:09.497422288Z" level=info msg="shim disconnected" id=bf885aba43938d2866c4c409ae2e2855b8ad06978936ae07c278c45a34746ce4
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:37:09.497488681Z" level=warning msg="cleaning up after shim disconnected" id=bf885aba43938d2866c4c409ae2e2855b8ad06978936ae07c278c45a34746ce4 namespace=k8s.io
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:37:09.497498840Z" level=info msg="cleaning up dead shim"
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:37:09.506245847Z" level=warning msg="cleanup warnings time=\"2022-06-01T11:37:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4788 runtime=io.containerd.runc.v2\n"
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:37:09.544967838Z" level=info msg="RemoveContainer for \"c6a18538a13a4b879f6715e47322383910efd40776c8b8680e8d2b9b9189ccc0\""
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 containerd[390]: time="2022-06-01T11:37:09.549160624Z" level=info msg="RemoveContainer for \"c6a18538a13a4b879f6715e47322383910efd40776c8b8680e8d2b9b9189ccc0\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220601110654-6708
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220601110654-6708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=default-k8s-different-port-20220601110654-6708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_24_39_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:24:36 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220601110654-6708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:35:07 +0000   Wed, 01 Jun 2022 11:24:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:35:07 +0000   Wed, 01 Jun 2022 11:24:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:35:07 +0000   Wed, 01 Jun 2022 11:24:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 01 Jun 2022 11:35:07 +0000   Wed, 01 Jun 2022 11:24:34 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220601110654-6708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                c3073178-0849-48bb-88da-ba72ab8c4ba0
	  Boot ID:                    6ec46557-2d76-4d83-8353-3ee04fd961c4
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220601110654-6708                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-bzkn8                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220601110654-6708              250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220601110654-6708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-nfvrv                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220601110654-6708              100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 13m   kube-proxy  
	  Normal  Starting                 13m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet     Node default-k8s-different-port-20220601110654-6708 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet     Updated Node Allocatable limit across pods
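
The Ready=False condition above is the root cause of the failure: kubelet reports NetworkPluginNotReady because no CNI network config was ever installed (the kindnet pod that would write it is crash-looping, per the kubelet log below), so the node keeps its node.kubernetes.io/not-ready:NoSchedule taint. A quick hypothetical check mirroring what the runtime looks for (the /etc/cni/net.d path is containerd's default; minikube may point kubelet at a different conf dir):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// containerd's CRI plugin loads CNI config from this directory by default.
		matches, _ := filepath.Glob("/etc/cni/net.d/*")
		if len(matches) == 0 {
			fmt.Println("no CNI config: node stays NotReady (cni plugin not initialized)")
			os.Exit(1)
		}
		for _, m := range matches {
			fmt.Println("found CNI config:", m)
		}
	}
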
	
	* 
	* ==> dmesg <==
	* [Jun 1 10:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.595787] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 23 50 3b c9 fd 08 06
	[  +0.586201] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[Jun 1 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a c7 83 3b 8c 1d 08 06
	[  +0.000302] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 8a d2 ee 2a 9a 08 06
	[ +12.725641] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.906380] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 82 da d0 8c 8d 08 06
	[  +0.302806] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[ +13.606547] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e ad f3 21 27 5b 08 06
	[  +8.626375] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 c1 ef be 01 dd 08 06
	[  +0.000348] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa d1 05 1a ef 66 08 06
	[  +8.278909] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 fb cd eb 09 11 08 06
	[Jun 1 11:01] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethb04b9442
	
	* 
	* ==> etcd [7d3ead15d6ba2e4b8c432e1081c87bd87496d8d69e3abb714f29c65bba94ebdf] <==
	* {"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:24:33.674Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:24:34.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:24:34.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:24:34.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:24:34.465Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:24:34.467Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:24:34.468Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:24:34.466Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220601110654-6708 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:34:34.479Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":685}
	{"level":"info","ts":"2022-06-01T11:34:34.480Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":685,"took":"824.861µs"}
	
	* 
	* ==> kernel <==
	*  11:37:57 up  1:20,  0 users,  load average: 0.03, 0.24, 0.91
	Linux default-k8s-different-port-20220601110654-6708 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a01c09dc992a3fcb76c065eaf6d9a37f822bb84514f98be837fc943d82bc46d3] <==
	* I0601 11:27:56.163065       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:29:37.329857       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:29:37.329934       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:29:37.329942       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:30:37.330451       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:30:37.330503       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:30:37.330514       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:32:37.331164       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:32:37.331239       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:32:37.331247       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:34:37.335301       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:34:37.335379       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:34:37.335389       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:35:37.336152       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:35:37.336235       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:35:37.336245       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 11:37:37.336379       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:37:37.336460       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:37:37.336468       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [d21e78271b81ab20da16b5cd9e947f35b35db3023a93fc154c959b24cd029c28] <==
	* W0601 11:31:53.336871       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:32:22.947315       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:32:23.351030       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:32:52.961460       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:32:53.365860       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:33:22.976343       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:33:23.380417       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:33:52.983851       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:33:53.395614       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:34:22.992491       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:34:23.411453       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:34:53.006140       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:34:53.428130       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:35:23.018863       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:35:23.441933       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:35:53.030455       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:35:53.455586       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:36:23.042036       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:36:23.470004       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:36:53.052473       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:36:53.485220       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:37:23.062026       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:37:23.499112       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0601 11:37:53.071859       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:37:53.513505       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [2fb746cc75b1d529404d0b3097c5644a162207995ae1736ab99ed2a7508b8ae8] <==
	* I0601 11:24:53.979664       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:24:53.979726       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:24:53.979767       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:24:54.001129       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:24:54.001171       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:24:54.001182       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:24:54.001206       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:24:54.001552       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:24:54.002098       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:24:54.002134       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:24:54.002209       1 config.go:317] "Starting service config controller"
	I0601 11:24:54.002223       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:24:54.102778       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:24:54.102782       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [dd66fe479b71f0dd37f716863c649f5efd7903cab492c2dfddeedc600bf510a0] <==
	* W0601 11:24:36.370197       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:24:36.370227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:24:36.371109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:24:36.371140       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:24:36.371150       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:24:36.371182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:24:36.371361       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:24:36.371399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:24:36.371503       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:24:36.371527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:24:36.371525       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:24:36.371543       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:24:37.249365       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:24:37.249400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:24:37.283809       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:24:37.283835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:24:37.285837       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:24:37.285861       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:24:37.454156       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:24:37.454193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:24:37.582046       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:24:37.582079       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:24:37.582723       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:24:37.582768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0601 11:24:37.963656       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:19:53 UTC, end at Wed 2022-06-01 11:37:57 UTC. --
	Jun 01 11:36:44 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:36:44.538681    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:36:49 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:36:49.539851    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:36:54 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:36:54.541342    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:36:59 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:36:59.178070    3056 scope.go:110] "RemoveContainer" containerID="c6a18538a13a4b879f6715e47322383910efd40776c8b8680e8d2b9b9189ccc0"
	Jun 01 11:36:59 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:36:59.542805    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:04 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:04.543366    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:37:09.543568    3056 scope.go:110] "RemoveContainer" containerID="c6a18538a13a4b879f6715e47322383910efd40776c8b8680e8d2b9b9189ccc0"
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:37:09.543951    3056 scope.go:110] "RemoveContainer" containerID="bf885aba43938d2866c4c409ae2e2855b8ad06978936ae07c278c45a34746ce4"
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:09.544226    3056 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-bzkn8_kube-system(4b32f531-7d27-4ce4-900c-f7566d5281ca)\"" pod="kube-system/kindnet-bzkn8" podUID=4b32f531-7d27-4ce4-900c-f7566d5281ca
	Jun 01 11:37:09 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:09.544275    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:14 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:14.545320    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:19 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:19.546999    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:23 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:37:23.178768    3056 scope.go:110] "RemoveContainer" containerID="bf885aba43938d2866c4c409ae2e2855b8ad06978936ae07c278c45a34746ce4"
	Jun 01 11:37:23 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:23.179044    3056 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-bzkn8_kube-system(4b32f531-7d27-4ce4-900c-f7566d5281ca)\"" pod="kube-system/kindnet-bzkn8" podUID=4b32f531-7d27-4ce4-900c-f7566d5281ca
	Jun 01 11:37:24 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:24.548537    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:29 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:29.550189    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:34 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:34.551611    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:35 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:37:35.178517    3056 scope.go:110] "RemoveContainer" containerID="bf885aba43938d2866c4c409ae2e2855b8ad06978936ae07c278c45a34746ce4"
	Jun 01 11:37:35 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:35.178774    3056 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-bzkn8_kube-system(4b32f531-7d27-4ce4-900c-f7566d5281ca)\"" pod="kube-system/kindnet-bzkn8" podUID=4b32f531-7d27-4ce4-900c-f7566d5281ca
	Jun 01 11:37:39 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:39.552527    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:44 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:44.553301    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:49 default-k8s-different-port-20220601110654-6708 kubelet[3056]: I0601 11:37:49.178678    3056 scope.go:110] "RemoveContainer" containerID="bf885aba43938d2866c4c409ae2e2855b8ad06978936ae07c278c45a34746ce4"
	Jun 01 11:37:49 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:49.178948    3056 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-bzkn8_kube-system(4b32f531-7d27-4ce4-900c-f7566d5281ca)\"" pod="kube-system/kindnet-bzkn8" podUID=4b32f531-7d27-4ce4-900c-f7566d5281ca
	Jun 01 11:37:49 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:49.554175    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jun 01 11:37:54 default-k8s-different-port-20220601110654-6708 kubelet[3056]: E0601 11:37:54.555552    3056 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-xtfld metrics-server-b955d9d8-qgk2q storage-provisioner dashboard-metrics-scraper-56974995fc-p9hc5 kubernetes-dashboard-8469778f77-k8wsb
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 describe pod coredns-64897985d-xtfld metrics-server-b955d9d8-qgk2q storage-provisioner dashboard-metrics-scraper-56974995fc-p9hc5 kubernetes-dashboard-8469778f77-k8wsb
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod coredns-64897985d-xtfld metrics-server-b955d9d8-qgk2q storage-provisioner dashboard-metrics-scraper-56974995fc-p9hc5 kubernetes-dashboard-8469778f77-k8wsb: exit status 1 (52.937773ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-xtfld" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-qgk2q" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-p9hc5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-k8wsb" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220601110654-6708 describe pod coredns-64897985d-xtfld metrics-server-b955d9d8-qgk2q storage-provisioner dashboard-metrics-scraper-56974995fc-p9hc5 kubernetes-dashboard-8469778f77-k8wsb: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.34s)
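
The kubelet log above shows why this test failed: kindnet-cni is stuck in CrashLoopBackOff with a 5m back-off, so the CNI plugin is never initialized and the kube-system pods never reach Running. The NotFound errors from the post-mortem describe are likely a namespace mismatch: the listed pods live in the kube-system and kubernetes-dashboard namespaces, while the describe ran against the default namespace. A minimal diagnostic sketch, assuming the cluster from this run is still up (the context, namespace, pod, and container names are taken verbatim from the output above):

	kubectl --context default-k8s-different-port-20220601110654-6708 -n kube-system describe pod kindnet-bzkn8
	kubectl --context default-k8s-different-port-20220601110654-6708 -n kube-system logs kindnet-bzkn8 -c kindnet-cni --previous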

                                                
                                    

Test pass (230/267)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.62
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.23.6/json-events 5.14
11 TestDownloadOnly/v1.23.6/preload-exists 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.31
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
18 TestDownloadOnlyKic 3.98
19 TestBinaryMirror 0.86
20 TestOffline 287.15
22 TestAddons/Setup 108.42
24 TestAddons/parallel/Registry 22.45
25 TestAddons/parallel/Ingress 27.65
26 TestAddons/parallel/MetricsServer 5.57
27 TestAddons/parallel/HelmTiller 13.39
29 TestAddons/parallel/CSI 42.66
31 TestAddons/serial/GCPAuth 40.78
32 TestAddons/StoppedEnableDisable 20.26
33 TestCertOptions 35.16
34 TestCertExpiration 243.06
36 TestForceSystemdFlag 45.53
37 TestForceSystemdEnv 266.85
38 TestKVMDriverInstallOrUpdate 4.29
42 TestErrorSpam/setup 27.67
43 TestErrorSpam/start 0.95
44 TestErrorSpam/status 1.11
45 TestErrorSpam/pause 2.13
46 TestErrorSpam/unpause 1.52
47 TestErrorSpam/stop 14.87
50 TestFunctional/serial/CopySyncFile 0
51 TestFunctional/serial/StartWithProxy 75.78
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 15.35
54 TestFunctional/serial/KubeContext 0.04
55 TestFunctional/serial/KubectlGetPods 0.17
58 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
59 TestFunctional/serial/CacheCmd/cache/add_local 2.13
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
61 TestFunctional/serial/CacheCmd/cache/list 0.06
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
63 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
64 TestFunctional/serial/CacheCmd/cache/delete 0.13
65 TestFunctional/serial/MinikubeKubectlCmd 0.11
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
67 TestFunctional/serial/ExtraConfig 43.11
68 TestFunctional/serial/ComponentHealth 0.06
69 TestFunctional/serial/LogsCmd 1.08
70 TestFunctional/serial/LogsFileCmd 1.11
72 TestFunctional/parallel/ConfigCmd 0.46
73 TestFunctional/parallel/DashboardCmd 8.49
74 TestFunctional/parallel/DryRun 0.52
75 TestFunctional/parallel/InternationalLanguage 0.22
76 TestFunctional/parallel/StatusCmd 1.14
79 TestFunctional/parallel/ServiceCmd 11.43
80 TestFunctional/parallel/ServiceCmdConnect 19.82
81 TestFunctional/parallel/AddonsCmd 0.39
82 TestFunctional/parallel/PersistentVolumeClaim 33.75
84 TestFunctional/parallel/SSHCmd 0.82
85 TestFunctional/parallel/CpCmd 1.4
86 TestFunctional/parallel/MySQL 24.5
87 TestFunctional/parallel/FileSync 0.44
88 TestFunctional/parallel/CertSync 2.61
92 TestFunctional/parallel/NodeLabels 0.07
94 TestFunctional/parallel/NonActiveRuntimeDisabled 0.82
96 TestFunctional/parallel/Version/short 0.07
97 TestFunctional/parallel/Version/components 0.69
98 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
99 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
100 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
101 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
102 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
103 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
104 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
105 TestFunctional/parallel/ImageCommands/ImageBuild 4.5
106 TestFunctional/parallel/ImageCommands/Setup 1.58
107 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
109 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
111 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.2
112 TestFunctional/parallel/ProfileCmd/profile_list 0.53
113 TestFunctional/parallel/ProfileCmd/profile_json_output 0.73
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.29
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.6
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.68
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.57
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.19
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.22
127 TestFunctional/parallel/MountCmd/any-port 8.29
128 TestFunctional/parallel/MountCmd/specific-port 2.24
129 TestFunctional/delete_addon-resizer_images 0.1
130 TestFunctional/delete_my-image_image 0.03
131 TestFunctional/delete_minikube_cached_images 0.03
134 TestIngressAddonLegacy/StartLegacyK8sCluster 74.74
136 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.75
137 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.4
138 TestIngressAddonLegacy/serial/ValidateIngressAddons 31.51
141 TestJSONOutput/start/Command 46.76
142 TestJSONOutput/start/Audit 0
144 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/pause/Command 0.67
148 TestJSONOutput/pause/Audit 0
150 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/unpause/Command 0.64
154 TestJSONOutput/unpause/Audit 0
156 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/stop/Command 15.71
160 TestJSONOutput/stop/Audit 0
162 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
164 TestErrorJSONOutput 0.3
166 TestKicCustomNetwork/create_custom_network 32.56
167 TestKicCustomNetwork/use_default_bridge_network 25.88
168 TestKicExistingNetwork 27.46
169 TestKicCustomSubnet 26.59
170 TestMainNoArgs 0.06
171 TestMinikubeProfile 69.93
174 TestMountStart/serial/StartWithMountFirst 4.89
175 TestMountStart/serial/VerifyMountFirst 0.33
176 TestMountStart/serial/StartWithMountSecond 4.85
177 TestMountStart/serial/VerifyMountSecond 0.34
178 TestMountStart/serial/DeleteFirst 1.82
179 TestMountStart/serial/VerifyMountPostDelete 0.34
180 TestMountStart/serial/Stop 1.25
181 TestMountStart/serial/RestartStopped 6.55
182 TestMountStart/serial/VerifyMountPostStop 0.32
185 TestMultiNode/serial/FreshStart2Nodes 75.41
186 TestMultiNode/serial/DeployApp2Nodes 4.54
187 TestMultiNode/serial/PingHostFrom2Pods 0.82
188 TestMultiNode/serial/AddNode 30.97
189 TestMultiNode/serial/ProfileList 0.36
190 TestMultiNode/serial/CopyFile 11.72
191 TestMultiNode/serial/StopNode 2.44
192 TestMultiNode/serial/StartAfterStop 36.27
193 TestMultiNode/serial/RestartKeepsNodes 186.56
194 TestMultiNode/serial/DeleteNode 5.18
195 TestMultiNode/serial/StopMultiNode 40.29
196 TestMultiNode/serial/RestartMultiNode 90.85
197 TestMultiNode/serial/ValidateNameConflict 32.31
202 TestPreload 190.36
204 TestScheduledStopUnix 108.7
207 TestInsufficientStorage 16.81
208 TestRunningBinaryUpgrade 284.99
210 TestKubernetesUpgrade 128.78
211 TestMissingContainerUpgrade 119.58
216 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
217 TestNoKubernetes/serial/StartWithK8s 255.44
222 TestNetworkPlugins/group/false 1.26
226 TestStoppedBinaryUpgrade/Setup 0.49
227 TestStoppedBinaryUpgrade/Upgrade 104
228 TestNoKubernetes/serial/StartWithStopK8s 19.5
229 TestNoKubernetes/serial/Start 4.9
230 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
231 TestNoKubernetes/serial/ProfileList 1.56
232 TestNoKubernetes/serial/Stop 2.97
233 TestNoKubernetes/serial/StartNoArgs 5.68
242 TestPause/serial/Start 48.03
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
244 TestPause/serial/SecondStartNoReconfiguration 16.04
245 TestPause/serial/Pause 0.71
246 TestPause/serial/VerifyStatus 0.41
247 TestPause/serial/Unpause 0.64
248 TestPause/serial/PauseAgain 5.35
249 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
250 TestPause/serial/DeletePaused 7.55
251 TestPause/serial/VerifyDeletedResources 0.6
253 TestNetworkPlugins/group/kindnet/Start 60.15
254 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
255 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
256 TestNetworkPlugins/group/kindnet/NetCatPod 10.19
257 TestNetworkPlugins/group/kindnet/DNS 0.15
258 TestNetworkPlugins/group/kindnet/Localhost 0.12
259 TestNetworkPlugins/group/kindnet/HairPin 0.13
260 TestNetworkPlugins/group/enable-default-cni/Start 70.28
261 TestNetworkPlugins/group/bridge/Start 52.56
262 TestNetworkPlugins/group/calico/Start 79.85
263 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.61
264 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.82
265 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
266 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
267 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
268 TestNetworkPlugins/group/cilium/Start 71.32
269 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
270 TestNetworkPlugins/group/bridge/NetCatPod 9.33
271 TestNetworkPlugins/group/bridge/DNS 0.17
272 TestNetworkPlugins/group/bridge/Localhost 0.15
273 TestNetworkPlugins/group/bridge/HairPin 0.15
276 TestNetworkPlugins/group/calico/ControllerPod 5.02
277 TestNetworkPlugins/group/calico/KubeletFlags 0.37
279 TestNetworkPlugins/group/cilium/ControllerPod 5.02
280 TestNetworkPlugins/group/cilium/KubeletFlags 0.35
281 TestNetworkPlugins/group/cilium/NetCatPod 8.8
282 TestNetworkPlugins/group/cilium/DNS 0.12
283 TestNetworkPlugins/group/cilium/Localhost 0.12
284 TestNetworkPlugins/group/cilium/HairPin 0.17
286 TestStartStop/group/no-preload/serial/FirstStart 60.67
287 TestStartStop/group/no-preload/serial/DeployApp 9.3
288 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.57
289 TestStartStop/group/no-preload/serial/Stop 20.14
290 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
291 TestStartStop/group/no-preload/serial/SecondStart 323.67
295 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.01
296 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
297 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
298 TestStartStop/group/no-preload/serial/Pause 3.08
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.6
304 TestStartStop/group/old-k8s-version/serial/Stop 1.3
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
308 TestStartStop/group/newest-cni/serial/FirstStart 42.57
309 TestStartStop/group/newest-cni/serial/DeployApp 0
310 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.48
311 TestStartStop/group/newest-cni/serial/Stop 20.09
312 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
313 TestStartStop/group/newest-cni/serial/SecondStart 34.17
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
317 TestStartStop/group/newest-cni/serial/Pause 2.81
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.56
319 TestStartStop/group/embed-certs/serial/Stop 11.21
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
322 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.57
323 TestStartStop/group/default-k8s-different-port/serial/Stop 10.34
324 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
TestDownloadOnly/v1.16.0/json-events (14.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220601101959-6708 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220601101959-6708 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.617715262s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.62s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220601101959-6708
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220601101959-6708: exit status 85 (76.666016ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 10:19:59
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 10:19:59.110799    6720 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:19:59.110899    6720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:19:59.110909    6720 out.go:309] Setting ErrFile to fd 2...
	I0601 10:19:59.110914    6720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:19:59.111020    6720 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	W0601 10:19:59.111144    6720 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: no such file or directory
	I0601 10:19:59.111357    6720 out.go:303] Setting JSON to true
	I0601 10:19:59.112580    6720 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":153,"bootTime":1654078646,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:19:59.112710    6720 start.go:125] virtualization: kvm guest
	I0601 10:19:59.115936    6720 out.go:97] [download-only-20220601101959-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 10:19:59.117524    6720 out.go:169] MINIKUBE_LOCATION=14079
	W0601 10:19:59.116086    6720 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball: no such file or directory
	I0601 10:19:59.116088    6720 notify.go:193] Checking for updates...
	I0601 10:19:59.120315    6720 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:19:59.121752    6720 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:19:59.123116    6720 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:19:59.124299    6720 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0601 10:19:59.126673    6720 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 10:19:59.126812    6720 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:19:59.160160    6720 docker.go:137] docker version: linux-20.10.16
	I0601 10:19:59.160242    6720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:19:59.860768    6720 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:33 SystemTime:2022-06-01 10:19:59.184786024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:19:59.860887    6720 docker.go:254] overlay module found
	I0601 10:19:59.862817    6720 out.go:97] Using the docker driver based on user configuration
	I0601 10:19:59.862834    6720 start.go:284] selected driver: docker
	I0601 10:19:59.862839    6720 start.go:806] validating driver "docker" against <nil>
	I0601 10:19:59.862993    6720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:19:59.962190    6720 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:33 SystemTime:2022-06-01 10:19:59.887611063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:19:59.962303    6720 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 10:19:59.962753    6720 start_flags.go:373] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0601 10:19:59.962861    6720 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 10:19:59.964946    6720 out.go:169] Using Docker driver with the root privilege
	I0601 10:19:59.966184    6720 cni.go:95] Creating CNI manager for ""
	I0601 10:19:59.966203    6720 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 10:19:59.966218    6720 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 10:19:59.966229    6720 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 10:19:59.966233    6720 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 10:19:59.966246    6720 start_flags.go:306] config:
	{Name:download-only-20220601101959-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601101959-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:19:59.967816    6720 out.go:97] Starting control plane node download-only-20220601101959-6708 in cluster download-only-20220601101959-6708
	I0601 10:19:59.967844    6720 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 10:19:59.969279    6720 out.go:97] Pulling base image ...
	I0601 10:19:59.969306    6720 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 10:19:59.969333    6720 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:20:00.008863    6720 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:20:00.009136    6720 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 10:20:00.009229    6720 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:20:00.079246    6720 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0601 10:20:00.079273    6720 cache.go:57] Caching tarball of preloaded images
	I0601 10:20:00.079435    6720 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 10:20:00.081540    6720 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0601 10:20:00.081556    6720 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:00.192286    6720 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0601 10:20:04.591002    6720 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:04.591073    6720 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:05.451188    6720 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0601 10:20:05.451564    6720 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/download-only-20220601101959-6708/config.json ...
	I0601 10:20:05.451608    6720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/download-only-20220601101959-6708/config.json: {Name:mk6040474b6b62a6fa572d35bcd784b9abdc8043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:20:05.451815    6720 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 10:20:05.452067    6720 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601101959-6708"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
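
The exit status 85 above is tolerated for a download-only profile: no node was ever created (hence the trailing message that control plane node "" does not exist), so there are no cluster logs to collect, and the test still passes, which suggests it only checks the command's duration. What the run did produce is the cached preload tarball and kubectl binary; a quick sketch for checking them by hand, assuming MINIKUBE_HOME points at the .minikube directory shown in the log above:

	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4"
	ls -lh "$MINIKUBE_HOME/cache/linux/amd64/v1.16.0/kubectl"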

                                                
                                    
TestDownloadOnly/v1.23.6/json-events (5.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220601101959-6708 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220601101959-6708 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.135970285s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (5.14s)

                                                
                                    
TestDownloadOnly/v1.23.6/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220601101959-6708
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220601101959-6708: exit status 85 (76.770332ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 10:20:13
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 10:20:13.809038    6890 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:20:13.809188    6890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:20:13.809198    6890 out.go:309] Setting ErrFile to fd 2...
	I0601 10:20:13.809202    6890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:20:13.809316    6890 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	W0601 10:20:13.809420    6890 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: no such file or directory
	I0601 10:20:13.809522    6890 out.go:303] Setting JSON to true
	I0601 10:20:13.810260    6890 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":168,"bootTime":1654078646,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:20:13.810316    6890 start.go:125] virtualization: kvm guest
	I0601 10:20:13.812717    6890 out.go:97] [download-only-20220601101959-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 10:20:13.814460    6890 out.go:169] MINIKUBE_LOCATION=14079
	I0601 10:20:13.812886    6890 notify.go:193] Checking for updates...
	I0601 10:20:13.817364    6890 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:20:13.818951    6890 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:20:13.820391    6890 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:20:13.821937    6890 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0601 10:20:13.824661    6890 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 10:20:13.825210    6890 config.go:178] Loaded profile config "download-only-20220601101959-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0601 10:20:13.825269    6890 start.go:714] api.Load failed for download-only-20220601101959-6708: filestore "download-only-20220601101959-6708": Docker machine "download-only-20220601101959-6708" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 10:20:13.825333    6890 driver.go:358] Setting default libvirt URI to qemu:///system
	W0601 10:20:13.825374    6890 start.go:714] api.Load failed for download-only-20220601101959-6708: filestore "download-only-20220601101959-6708": Docker machine "download-only-20220601101959-6708" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 10:20:13.859875    6890 docker.go:137] docker version: linux-20.10.16
	I0601 10:20:13.859947    6890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:20:13.951970    6890 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:33 SystemTime:2022-06-01 10:20:13.884379256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:20:13.952083    6890 docker.go:254] overlay module found
	I0601 10:20:13.953999    6890 out.go:97] Using the docker driver based on existing profile
	I0601 10:20:13.954016    6890 start.go:284] selected driver: docker
	I0601 10:20:13.954020    6890 start.go:806] validating driver "docker" against &{Name:download-only-20220601101959-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601101959-6708 Namesp
ace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false}
	I0601 10:20:13.954616    6890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:20:14.049193    6890 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:33 SystemTime:2022-06-01 10:20:13.979557541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:20:14.049740    6890 cni.go:95] Creating CNI manager for ""
	I0601 10:20:14.049755    6890 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0601 10:20:14.049770    6890 start_flags.go:306] config:
	{Name:download-only-20220601101959-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220601101959-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:20:14.051803    6890 out.go:97] Starting control plane node download-only-20220601101959-6708 in cluster download-only-20220601101959-6708
	I0601 10:20:14.051822    6890 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0601 10:20:14.053233    6890 out.go:97] Pulling base image ...
	I0601 10:20:14.053255    6890 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:20:14.053300    6890 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:20:14.091698    6890 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:20:14.091953    6890 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 10:20:14.091971    6890 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 10:20:14.091975    6890 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 10:20:14.091989    6890 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 10:20:14.157146    6890 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 10:20:14.157172    6890 cache.go:57] Caching tarball of preloaded images
	I0601 10:20:14.157354    6890 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:20:14.159820    6890 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0601 10:20:14.159836    6890 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:14.267009    6890 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:af5c6eac9f26fa4c647c193efff8a3b0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 10:20:17.228325    6890 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:17.228413    6890 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:18.165472    6890 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 10:20:18.165603    6890 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/download-only-20220601101959-6708/config.json ...
	I0601 10:20:18.165810    6890 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:20:18.166014    6890 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.23.6/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601101959-6708"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.08s)
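Note: the download.go lines above fetch artifacts through URLs carrying a ?checksum= query parameter (an inline md5 digest for the preload tarball, a sidecar .sha256 file for kubectl). A minimal sketch of that download-and-verify pattern, assuming the hashicorp/go-getter library that minikube's download helpers wrap; the destination path is illustrative:

	package main

	import (
		"log"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// go-getter parses the ?checksum= query parameter, downloads the
		// file, and fails if the computed digest does not match. The value
		// may be inline ("md5:<hex>") or a sidecar file ("file:<url>").
		src := "https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl" +
			"?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl.sha256"

		if err := getter.GetFile("/tmp/kubectl-v1.23.6", src); err != nil {
			log.Fatalf("download failed verification: %v", err)
		}
		log.Println("kubectl downloaded and checksum-verified")
	}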

TestDownloadOnly/DeleteAll (0.31s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.31s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220601101959-6708
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnlyKic (3.98s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220601102019-6708 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220601102019-6708 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (2.849034344s)
helpers_test.go:175: Cleaning up "download-docker-20220601102019-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220601102019-6708
--- PASS: TestDownloadOnlyKic (3.98s)

TestBinaryMirror (0.86s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220601102023-6708 --alsologtostderr --binary-mirror http://127.0.0.1:43237 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-20220601102023-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220601102023-6708
--- PASS: TestBinaryMirror (0.86s)

TestOffline (287.15s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220601104837-6708 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220601104837-6708 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (4m44.021340852s)
helpers_test.go:175: Cleaning up "offline-containerd-20220601104837-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220601104837-6708

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220601104837-6708: (3.124267825s)
--- PASS: TestOffline (287.15s)

TestAddons/Setup (108.42s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220601102024-6708 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220601102024-6708 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m48.419510948s)
--- PASS: TestAddons/Setup (108.42s)

TestAddons/parallel/Registry (22.45s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 9.867576ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-z54z4" [41e882ab-865e-41b7-839d-b2c7290960c0] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008712822s

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-qbkgl" [79eaea61-a731-4226-a7e1-1531f2b3db5e] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008003241s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220601102024-6708 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220601102024-6708 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220601102024-6708 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.71141322s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 ip
2022/06/01 10:22:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:338: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.45s)

TestAddons/parallel/Ingress (27.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220601102024-6708 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220601102024-6708 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:182: (dbg) Done: kubectl --context addons-20220601102024-6708 replace --force -f testdata/nginx-ingress-v1.yaml: (1.222364149s)
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220601102024-6708 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [3ebd6aa9-902c-43c3-918d-8cef05882e32] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [3ebd6aa9-902c-43c3-918d-8cef05882e32] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.314283945s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:236: (dbg) Run:  kubectl --context addons-20220601102024-6708 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable ingress-dns --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable ingress-dns --alsologtostderr -v=1: (1.527272151s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable ingress --alsologtostderr -v=1: (7.47122921s)
--- PASS: TestAddons/parallel/Ingress (27.65s)

TestAddons/parallel/MetricsServer (5.57s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 9.548178ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-bd6f4dd56-9r2p7" [ff450eb4-428a-48f1-b51e-e30813788298] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008752468s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220601102024-6708 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.57s)

TestAddons/parallel/HelmTiller (13.39s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 9.647976ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-8tjbr" [4e979588-0de9-4045-a2b4-6997a531f94c] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008138035s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220601102024-6708 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220601102024-6708 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.809184931s)
addons_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.39s)

TestAddons/parallel/CSI (42.66s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 11.482297ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220601102024-6708 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601102024-6708 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220601102024-6708 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [697b93b1-0808-43a2-a445-eb5e71f9490c] Pending
helpers_test.go:342: "task-pv-pod" [697b93b1-0808-43a2-a445-eb5e71f9490c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [697b93b1-0808-43a2-a445-eb5e71f9490c] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.005804825s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220601102024-6708 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601102024-6708 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601102024-6708 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220601102024-6708 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220601102024-6708 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220601102024-6708 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601102024-6708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220601102024-6708 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [712de1c1-d855-41d8-9942-91676d56ed51] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [712de1c1-d855-41d8-9942-91676d56ed51] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [712de1c1-d855-41d8-9942-91676d56ed51] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.00653671s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220601102024-6708 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220601102024-6708 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220601102024-6708 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.852550654s)
addons_test.go:592: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.66s)
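Note: the CSI test above waits on claims by polling kubectl's jsonpath output until {.status.phase} reads Bound (the helpers_test.go:392 runs). A minimal sketch of that wait loop, assuming only kubectl on PATH; the context name, claim name, and timeout are taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase polls the claim's status.phase via kubectl, the same
	// check the helpers above run, until it matches or the deadline passes.
	func waitForPVCPhase(kubeContext, name, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %q did not reach phase %q within %v", name, want, timeout)
	}

	func main() {
		if err := waitForPVCPhase("addons-20220601102024-6708", "hpvc", "Bound", 6*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("pvc hpvc is Bound")
	}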

TestAddons/serial/GCPAuth (40.78s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220601102024-6708 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [acf02b1b-8ba7-4708-bd11-92ed702e6fea] Pending
helpers_test.go:342: "busybox" [acf02b1b-8ba7-4708-bd11-92ed702e6fea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [acf02b1b-8ba7-4708-bd11-92ed702e6fea] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.00541666s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220601102024-6708 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220601102024-6708 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-linux-amd64 -p addons-20220601102024-6708 addons disable gcp-auth --alsologtostderr -v=1: (5.710030058s)
addons_test.go:681: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102024-6708 addons enable gcp-auth
addons_test.go:687: (dbg) Run:  kubectl --context addons-20220601102024-6708 apply -f testdata/private-image.yaml
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7f8587d5b7-488vw" [d40657e4-41f1-4e5b-83c4-e24a92715784] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7f8587d5b7-488vw" [d40657e4-41f1-4e5b-83c4-e24a92715784] Running
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 15.005329337s
addons_test.go:700: (dbg) Run:  kubectl --context addons-20220601102024-6708 apply -f testdata/private-image-eu.yaml
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-869dcfd8c7-q2qw6" [f4686e2c-f3f6-466c-b2c7-3c6154473a8d] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-869dcfd8c7-q2qw6" [f4686e2c-f3f6-466c-b2c7-3c6154473a8d] Running
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.007202878s
--- PASS: TestAddons/serial/GCPAuth (40.78s)

TestAddons/StoppedEnableDisable (20.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220601102024-6708
addons_test.go:132: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220601102024-6708: (20.066796561s)
addons_test.go:136: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220601102024-6708
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220601102024-6708
--- PASS: TestAddons/StoppedEnableDisable (20.26s)

TestCertOptions (35.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220601105444-6708 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220601105444-6708 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.880307731s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220601105444-6708 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220601105444-6708 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220601105444-6708 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220601105444-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220601105444-6708

=== CONT  TestCertOptions
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220601105444-6708: (2.537731849s)
--- PASS: TestCertOptions (35.16s)

TestCertExpiration (243.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220601105338-6708 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220601105338-6708 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.39054234s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220601105338-6708 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220601105338-6708 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (16.744275283s)
helpers_test.go:175: Cleaning up "cert-expiration-20220601105338-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220601105338-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220601105338-6708: (2.920151182s)
--- PASS: TestCertExpiration (243.06s)

TestForceSystemdFlag (45.53s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220601105435-6708 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220601105435-6708 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.39229859s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220601105435-6708 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220601105435-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220601105435-6708

=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220601105435-6708: (2.760169797s)
--- PASS: TestForceSystemdFlag (45.53s)

TestForceSystemdEnv (266.85s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220601104837-6708 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220601104837-6708 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4m23.197973563s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220601104837-6708 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20220601104837-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220601104837-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220601104837-6708: (3.202559936s)
--- PASS: TestForceSystemdEnv (266.85s)

TestKVMDriverInstallOrUpdate (4.29s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.29s)

TestErrorSpam/setup (27.67s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220601102402-6708 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220601102402-6708 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220601102402-6708 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220601102402-6708 --driver=docker  --container-runtime=containerd: (27.670922763s)
--- PASS: TestErrorSpam/setup (27.67s)

TestErrorSpam/start (0.95s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 start --dry-run
--- PASS: TestErrorSpam/start (0.95s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (2.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 pause: (1.164377049s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 pause
--- PASS: TestErrorSpam/pause (2.13s)

TestErrorSpam/unpause (1.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

TestErrorSpam/stop (14.87s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 stop: (14.609479681s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102402-6708 --log_dir /tmp/nospam-20220601102402-6708 stop
--- PASS: TestErrorSpam/stop (14.87s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/test/nested/copy/6708/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102456-6708 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2160: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220601102456-6708 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m15.77600973s)
--- PASS: TestFunctional/serial/StartWithProxy (75.78s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.35s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102456-6708 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220601102456-6708 --alsologtostderr -v=8: (15.350766713s)
functional_test.go:655: soft start took 15.351473303s for "functional-20220601102456-6708" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.35s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.17s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220601102456-6708 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.17s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 cache add k8s.gcr.io/pause:3.3: (1.643369345s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 cache add k8s.gcr.io/pause:latest: (1.09512358s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220601102456-6708 /tmp/TestFunctionalserialCacheCmdcacheadd_local3692669222/001
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 cache add minikube-local-cache-test:functional-20220601102456-6708
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 cache add minikube-local-cache-test:functional-20220601102456-6708: (1.846055134s)
functional_test.go:1086: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 cache delete minikube-local-cache-test:functional-20220601102456-6708
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220601102456-6708
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (356.585814ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 cache reload
functional_test.go:1155: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 kubectl -- --context functional-20220601102456-6708 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220601102456-6708 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (43.11s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102456-6708 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0601 10:27:12.929038    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:12.934747    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:12.944967    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:12.965224    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:13.005480    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:13.085762    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:13.246162    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:13.566694    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:14.207605    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:15.487974    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:27:18.049140    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
functional_test.go:749: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220601102456-6708 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.108272209s)
functional_test.go:753: restart took 43.108365263s for "functional-20220601102456-6708" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.11s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220601102456-6708 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 logs
functional_test.go:1228: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 logs: (1.077295542s)
--- PASS: TestFunctional/serial/LogsCmd (1.08s)

TestFunctional/serial/LogsFileCmd (1.11s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 logs --file /tmp/TestFunctionalserialLogsFileCmd4065536604/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 logs --file /tmp/TestFunctionalserialLogsFileCmd4065536604/001/logs.txt: (1.107507794s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.11s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102456-6708 config get cpus: exit status 14 (71.791833ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102456-6708 config get cpus: exit status 14 (71.361406ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
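Note: ConfigCmd asserts on the binary's exit codes: config get on an unset key exits 14 instead of 0, as the two Non-zero exit runs above show. A minimal sketch of driving the binary and branching on that code, assuming the built binary at out/minikube-linux-amd64; the profile name is copied from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-20220601102456-6708", "config", "get", "cpus")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// An unset key exits with status 14, per the runs above.
			fmt.Printf("exit status %d: %s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err) // e.g. binary not built
			return
		}
		fmt.Printf("cpus = %s", out)
	}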

TestFunctional/parallel/DashboardCmd (8.49s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220601102456-6708 --alsologtostderr -v=1]
E0601 10:27:53.891508    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220601102456-6708 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 41978: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.49s)

TestFunctional/parallel/DryRun (0.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102456-6708 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:966: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220601102456-6708 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (216.676125ms)

-- stdout --
	* [functional-20220601102456-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0601 10:27:53.293671   41384 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:27:53.293768   41384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:27:53.293776   41384 out.go:309] Setting ErrFile to fd 2...
	I0601 10:27:53.293781   41384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:27:53.293885   41384 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:27:53.294103   41384 out.go:303] Setting JSON to false
	I0601 10:27:53.295219   41384 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":628,"bootTime":1654078646,"procs":563,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:27:53.295274   41384 start.go:125] virtualization: kvm guest
	I0601 10:27:53.297957   41384 out.go:177] * [functional-20220601102456-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 10:27:53.299409   41384 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:27:53.300869   41384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:27:53.302152   41384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:27:53.303491   41384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:27:53.304985   41384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 10:27:53.306889   41384 config.go:178] Loaded profile config "functional-20220601102456-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:27:53.307445   41384 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:27:53.346402   41384 docker.go:137] docker version: linux-20.10.16
	I0601 10:27:53.346486   41384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:27:53.442568   41384 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:77 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-06-01 10:27:53.37362995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:27:53.442662   41384 docker.go:254] overlay module found
	I0601 10:27:53.444972   41384 out.go:177] * Using the docker driver based on existing profile
	I0601 10:27:53.446178   41384 start.go:284] selected driver: docker
	I0601 10:27:53.446193   41384 start.go:806] validating driver "docker" against &{Name:functional-20220601102456-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102456-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:27:53.446297   41384 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:27:53.448445   41384 out.go:177] 
	W0601 10:27:53.449820   41384 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0601 10:27:53.451012   41384 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102456-6708 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.52s)
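
The dry-run path validates the requested configuration without creating or modifying anything; asking for 250MB trips the 1800MB usable minimum and minikube exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A small sketch asserting on that exit code, under the same binary and profile assumptions as above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"start", "-p", "functional-20220601102456-6708", "--dry-run",
			"--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
		err := cmd.Run()
		ee, ok := err.(*exec.ExitError)
		if !ok {
			fmt.Println("expected a non-zero exit for an undersized --memory request")
			return
		}
		fmt.Println("exit code:", ee.ExitCode()) // expect 23, as in the run above
	}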

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102456-6708 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220601102456-6708 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (224.650184ms)

-- stdout --
	* [functional-20220601102456-6708] minikube v1.26.0-beta.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0601 10:27:49.981564   40049 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:27:49.981660   40049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:27:49.981671   40049 out.go:309] Setting ErrFile to fd 2...
	I0601 10:27:49.981679   40049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:27:49.981846   40049 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:27:49.982095   40049 out.go:303] Setting JSON to false
	I0601 10:27:49.983257   40049 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":624,"bootTime":1654078646,"procs":578,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:27:49.983326   40049 start.go:125] virtualization: kvm guest
	I0601 10:27:49.985932   40049 out.go:177] * [functional-20220601102456-6708] minikube v1.26.0-beta.1 sur Ubuntu 20.04 (kvm/amd64)
	I0601 10:27:49.987413   40049 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:27:49.989024   40049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:27:49.990554   40049 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:27:49.992047   40049 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:27:49.993624   40049 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 10:27:49.995367   40049 config.go:178] Loaded profile config "functional-20220601102456-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:27:49.995743   40049 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:27:50.033682   40049 docker.go:137] docker version: linux-20.10.16
	I0601 10:27:50.033770   40049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:27:50.132462   40049 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:77 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-06-01 10:27:50.062448231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:27:50.132572   40049 docker.go:254] overlay module found
	I0601 10:27:50.135117   40049 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0601 10:27:50.136407   40049 start.go:284] selected driver: docker
	I0601 10:27:50.136430   40049 start.go:806] validating driver "docker" against &{Name:functional-20220601102456-6708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102456-6708 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:27:50.136534   40049 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:27:50.138688   40049 out.go:177] 
	W0601 10:27:50.140064   40049 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0601 10:27:50.141317   40049 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 status
functional_test.go:852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:864: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

TestFunctional/parallel/ServiceCmd (11.43s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220601102456-6708 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220601102456-6708 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-zfmq5" [bfe947f3-5365-427a-8c27-e13c0ba38c2d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-zfmq5" [bfe947f3-5365-427a-8c27-e13c0ba38c2d] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.006292022s
functional_test.go:1448: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 service list: (1.476469538s)
functional_test.go:1462: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 service --namespace=default --https --url hello-node
functional_test.go:1475: found endpoint: https://192.168.49.2:32719
functional_test.go:1490: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 service hello-node --url --format={{.IP}}
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:32719
--- PASS: TestFunctional/parallel/ServiceCmd (11.43s)
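
"minikube service &lt;name&gt; --url" resolves a NodePort service to a host-reachable URL; this run got https://192.168.49.2:32719 and http://192.168.49.2:32719 for hello-node. A sketch that fetches the URL and probes it once, assuming the same binary and profile:

	package main

	import (
		"fmt"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-20220601102456-6708",
			"service", "hello-node", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:32719
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // the echoserver answers 200 OK
	}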

TestFunctional/parallel/ServiceCmdConnect (19.82s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220601102456-6708 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220601102456-6708 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-pl54n" [61f76138-ea45-44fc-ba65-de2085ed34be] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0601 10:27:33.410816    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-pl54n" [61f76138-ea45-44fc-ba65-de2085ed34be] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.008230866s
functional_test.go:1578: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1584: found endpoint for hello-node-connect: http://192.168.49.2:31768
functional_test.go:1604: http://192.168.49.2:31768: success! body:

Hostname: hello-node-connect-74cf8bc446-pl54n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31768
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (19.82s)

TestFunctional/parallel/AddonsCmd (0.39s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1631: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.39s)

TestFunctional/parallel/PersistentVolumeClaim (33.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [24bdd36c-b6ad-45d0-9dad-f450f4ef786b] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.022491283s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220601102456-6708 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220601102456-6708 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220601102456-6708 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601102456-6708 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [5203a7bc-a8c6-4765-b59c-3f1f0d7d6235] Pending
helpers_test.go:342: "sp-pod" [5203a7bc-a8c6-4765-b59c-3f1f0d7d6235] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [5203a7bc-a8c6-4765-b59c-3f1f0d7d6235] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.009203461s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220601102456-6708 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220601102456-6708 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220601102456-6708 delete -f testdata/storage-provisioner/pod.yaml: (1.81143836s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601102456-6708 apply -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [0bc580f6-2551-4d23-abe0-0ccabe201933] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [0bc580f6-2551-4d23-abe0-0ccabe201933] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [0bc580f6-2551-4d23-abe0-0ccabe201933] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.008359264s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220601102456-6708 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.75s)
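
The persistence check above boils down to: write a marker file on the PVC-backed mount, delete and re-apply the pod manifest, then confirm the file survived the pod's recreation. A condensed sketch of that sequence (pod and manifest names taken from this run; the wait-for-Running steps the test performs are reduced to a comment):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) {
		base := []string{"--context", "functional-20220601102456-6708"}
		out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Write a marker onto the claim-backed mount inside the pod.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		// Recreate the pod; the claim (and its data) outlives it.
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// ... wait for the new sp-pod to be Running, as the test does, then:
		kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect "foo"
	}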

TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (1.40s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh -n functional-20220601102456-6708 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 cp functional-20220601102456-6708:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3620526535/001/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh -n functional-20220601102456-6708 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)

TestFunctional/parallel/MySQL (24.50s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220601102456-6708 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-5pz5q" [2f89bf99-f935-456e-8a55-75161b20caa1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-5pz5q" [2f89bf99-f935-456e-8a55-75161b20caa1] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.035198224s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601102456-6708 exec mysql-b87c45988-5pz5q -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601102456-6708 exec mysql-b87c45988-5pz5q -- mysql -ppassword -e "show databases;": exit status 1 (313.31515ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601102456-6708 exec mysql-b87c45988-5pz5q -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601102456-6708 exec mysql-b87c45988-5pz5q -- mysql -ppassword -e "show databases;": exit status 1 (212.364201ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601102456-6708 exec mysql-b87c45988-5pz5q -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601102456-6708 exec mysql-b87c45988-5pz5q -- mysql -ppassword -e "show databases;": exit status 1 (118.603157ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601102456-6708 exec mysql-b87c45988-5pz5q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.50s)
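
The two ERROR 2002 failures above are the usual mysqld warm-up window: the pod reports Running before the server accepts socket connections, so the test simply retries until "show databases;" succeeds. A sketch of that retry loop (pod name copied from this run; it changes on every deployment):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			out, err := exec.Command("kubectl",
				"--context", "functional-20220601102456-6708",
				"exec", "mysql-b87c45988-5pz5q", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			time.Sleep(2 * time.Second) // ERROR 2002 while mysqld is still starting
		}
		fmt.Println("mysql never became ready")
	}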

TestFunctional/parallel/FileSync (0.44s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/6708/hosts within VM

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo cat /etc/test/nested/copy/6708/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

TestFunctional/parallel/CertSync (2.61s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/6708.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo cat /etc/ssl/certs/6708.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/6708.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo cat /usr/share/ca-certificates/6708.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/67082.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo cat /etc/ssl/certs/67082.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/67082.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo cat /usr/share/ca-certificates/67082.pem"
E0601 10:27:23.169821    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.61s)
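
minikube places synced certificates in the node at both /etc/ssl/certs/&lt;name&gt;.pem and /usr/share/ca-certificates/&lt;name&gt;.pem, alongside an OpenSSL hash-named copy such as 51391683.0, and the test simply cats each expected path over ssh. A sketch probing the same paths under the same binary and profile assumptions:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		paths := []string{
			"/etc/ssl/certs/6708.pem",
			"/usr/share/ca-certificates/6708.pem",
			"/etc/ssl/certs/51391683.0",
		}
		for _, p := range paths {
			out, err := exec.Command("out/minikube-linux-amd64",
				"-p", "functional-20220601102456-6708",
				"ssh", "sudo cat "+p).Output()
			if err != nil {
				fmt.Println(p, "missing:", err)
				continue
			}
			fmt.Printf("%s: %d bytes\n", p, len(out))
		}
	}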

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220601102456-6708 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo systemctl is-active docker": exit status 1 (439.273244ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo systemctl is-active crio": exit status 1 (380.621625ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)
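
With containerd selected, docker and crio must both be inactive inside the node. "systemctl is-active" prints "inactive" and exits with status 3, which the ssh wrapper surfaces as the non-zero exit seen above, so the assertion is on the command failing. A sketch of the same probe:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, svc := range []string{"docker", "crio"} {
			out, err := exec.Command("out/minikube-linux-amd64",
				"-p", "functional-20220601102456-6708",
				"ssh", "sudo systemctl is-active "+svc).Output()
			// Output returns captured stdout even when the command fails;
			// "inactive" plus a non-zero exit is the expected result here.
			fmt.Printf("%s: %q (err: %v)\n", svc, string(out), err)
		}
	}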

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.69s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220601102456-6708
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20210326-1e038dc5             | sha256:6de166 | 54MB   |
| gcr.io/google-containers/addon-resizer      | functional-20220601102456-6708 | sha256:ffd4cf | 10.8MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | sha256:a4ca41 | 13.6MB |
| k8s.gcr.io/echoserver                       | 1.8                            | sha256:82e4c8 | 46.2MB |
| docker.io/library/mysql                     | 5.7                            | sha256:2a0961 | 162MB  |
| docker.io/library/nginx                     | alpine                         | sha256:b1c3ac | 10.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/pause                            | 3.1                            | sha256:da86e6 | 353kB  |
| k8s.gcr.io/pause                            | latest                         | sha256:350b16 | 72.3kB |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | sha256:25f8c7 | 98.9MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                        | sha256:8fa62c | 32.6MB |
| k8s.gcr.io/kube-proxy                       | v1.23.6                        | sha256:4c0375 | 39.3MB |
| k8s.gcr.io/pause                            | 3.3                            | sha256:0184c1 | 298kB  |
| k8s.gcr.io/pause                            | 3.6                            | sha256:6270bb | 302kB  |
| docker.io/library/minikube-local-cache-test | functional-20220601102456-6708 | sha256:88624d | 1.74kB |
| docker.io/library/nginx                     | latest                         | sha256:0e901e | 56.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                        | sha256:df7b72 | 30.2MB |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                        | sha256:595f32 | 15.1MB |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls --format json:
[{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:88624da481887117461ca0f0c833a3dc6a1ccc0af36d83d34288ac273a83c006","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220601102456-6708"],"size":"1739"},{"id":"sha256:2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95","repoDigests":["docker.io/library/mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5"],"repoTags":["docker.io/library/mysql:5.7"],"size":"162466158"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f86
3f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":["k8s.gcr.io/kube-proxy@sha256:cc007fb495f362f18c74e6f5552060c6785ca2b802a5067251de55c7cc880741"],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"39277919"},{"id":"sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":["k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263"],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"98888614"},{"id":"sha256:8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:0cd8c0bed8d89d914ee5df41e8a40112fb0a28804429c7964296abedc94da9f1"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"32601483"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"3
53405"},{"id":"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","repoDigests":["docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"],"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"],"size":"53960776"},{"id":"sha256:b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15","repoDigests":["docker.io/library/nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10170636"},{"id":"sha256:0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":["docker.io/library/nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514"],"repoTags":["docker.io/library/nginx:latest"],"size":"56746739"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":
["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220601102456-6708"],"size":"10823156"},{"id":"sha256:df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:df94796b78d2285ffe6b231c2b39d25034dde8814de2f75d953a827e77fe6adf"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"30173645"},{"id":"sha256:595f327f224a42213913a39d224c8aceb96c81ad3909a
e13f6045f570aafe8f0","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:02b4e994459efa49c3e2392733e269893e23d4ac46e92e94107652963caae78b"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"15134087"},{"id":"sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":["k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db"],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"301773"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
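The JSON output above is a flat array of image records with the fields id, repoDigests, repoTags, and size (bytes, encoded as a string). A minimal Go sketch of a consumer, assuming only that shape (hypothetical code, not part of the minikube test suite):

	// Hypothetical decoder for `minikube image ls --format json` output,
	// based only on the fields visible in the stdout above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageRecord struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // size in bytes, as a string
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p",
			"functional-20220601102456-6708", "image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []imageRecord
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}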

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls --format yaml:
- id: sha256:88624da481887117461ca0f0c833a3dc6a1ccc0af36d83d34288ac273a83c006
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220601102456-6708
size: "1739"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests:
- k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "98888614"
- id: sha256:0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests:
- docker.io/library/nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514
repoTags:
- docker.io/library/nginx:latest
size: "56746739"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
size: "10823156"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:0cd8c0bed8d89d914ee5df41e8a40112fb0a28804429c7964296abedc94da9f1
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "32601483"
- id: sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests:
- k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
repoTags:
- k8s.gcr.io/pause:3.6
size: "301773"
- id: sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb
repoDigests:
- docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c
repoTags:
- docker.io/kindest/kindnetd:v20210326-1e038dc5
size: "53960776"
- id: sha256:2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests:
- docker.io/library/mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5
repoTags:
- docker.io/library/mysql:5.7
size: "162466158"
- id: sha256:4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:cc007fb495f362f18c74e6f5552060c6785ca2b802a5067251de55c7cc880741
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "39277919"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "353405"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests:
- docker.io/library/nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989
repoTags:
- docker.io/library/nginx:alpine
size: "10170636"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:df94796b78d2285ffe6b231c2b39d25034dde8814de2f75d953a827e77fe6adf
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "30173645"
- id: sha256:595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:02b4e994459efa49c3e2392733e269893e23d4ac46e92e94107652963caae78b
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "15134087"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh pgrep buildkitd: exit status 1 (465.438153ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image build -t localhost/my-image:functional-20220601102456-6708 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 image build -t localhost/my-image:functional-20220601102456-6708 testdata/build: (3.798946698s)
functional_test.go:318: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220601102456-6708 image build -t localhost/my-image:functional-20220601102456-6708 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.6s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 1.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:9c47e7207e79c5660b879395c4d61b1779cb5d94d193b7f303d59c33085291e8 done
#8 exporting config sha256:f53e5af801ebf26143953a5c2b23d448073bc394c83a9ca42e0dffbd43537dcb done
#8 naming to localhost/my-image:functional-20220601102456-6708
#8 naming to localhost/my-image:functional-20220601102456-6708 done
#8 DONE 0.1s
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.50s)
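From the buildkit steps above (#1 through #8), the test's build context appears to contain a three-step Dockerfile: a FROM on gcr.io/k8s-minikube/busybox, a RUN true, and an ADD content.txt /. A Go sketch that reproduces that flow outside the harness (hypothetical: the real context lives in testdata/build and may differ, and the content.txt payload here is invented):

	// Hypothetical reconstruction of the build context implied by the
	// buildkit log above, built via `minikube image build`.
	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
	)

	const dockerfile = `FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	`

	func main() {
		dir, err := os.MkdirTemp("", "build")
		if err != nil {
			panic(err)
		}
		defer os.RemoveAll(dir)
		if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
			panic(err)
		}
		// Placeholder file; the real content.txt is not shown in the log.
		if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
			panic(err)
		}
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-20220601102456-6708",
			"image", "build", "-t", "localhost/my-image:functional-20220601102456-6708", dir)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}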

TestFunctional/parallel/ImageCommands/Setup (1.58s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.541976885s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.58s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220601102456-6708 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
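The tunnel is started here as a long-lived background process and torn down later in DeleteTunnel. A hypothetical standalone equivalent of that start/stop pair (sketch code, not the test's own helpers):

	// Hypothetical start/stop of `minikube tunnel` as a background process,
	// mirroring the daemon lines above.
	package main

	import (
		"os/exec"
		"time"
	)

	func main() {
		tunnel := exec.Command("out/minikube-linux-amd64", "-p",
			"functional-20220601102456-6708", "tunnel", "--alsologtostderr")
		if err := tunnel.Start(); err != nil { // run in the background
			panic(err)
		}
		time.Sleep(10 * time.Second) // ... exercise LoadBalancer services here ...
		if err := tunnel.Process.Kill(); err != nil {
			panic(err)
		}
		_ = tunnel.Wait() // reap the process
	}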

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.2s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220601102456-6708 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [14d034f4-d0fa-4434-b39d-186e6ee8e16d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [14d034f4-d0fa-4434-b39d-186e6ee8e16d] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.014251564s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.20s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "447.348585ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1324: Took "83.808596ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.73s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "423.400579ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1374: Took "307.076111ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.73s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102456-6708: (4.061339632s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102456-6708: (6.357458282s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.60s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601102456-6708 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
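The jsonpath query above is how the test discovers the tunnel-assigned LoadBalancer IP. A rough standalone version of the same check (hypothetical code; the retry cadence is an assumption) shells out to kubectl until the ingress IP is populated:

	// Hypothetical poll for a LoadBalancer ingress IP via kubectl,
	// mirroring the jsonpath query used by the test above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const query = `-o=jsonpath={.status.loadBalancer.ingress[0].ip}`
		for i := 0; i < 30; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-20220601102456-6708",
				"get", "svc", "nginx-svc", query).Output()
			if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
				fmt.Println("tunnel IP:", ip) // e.g. 10.109.191.139 in the run above
				return
			}
			time.Sleep(2 * time.Second)
		}
		panic("no ingress IP assigned")
	}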

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.109.191.139 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220601102456-6708 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102456-6708: (5.955265182s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.68s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image save gcr.io/google-containers/addon-resizer:functional-20220601102456-6708 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 image save gcr.io/google-containers/addon-resizer:functional-20220601102456-6708 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.572112572s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image rm gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.898585021s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.19s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102456-6708 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601102456-6708: (1.144606712s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)
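Taken together, the ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon runs above exercise a tar round trip between the cluster runtime and the host. A condensed sketch of that flow (hypothetical wrapper code using only the commands shown in the log; the tar path here is illustrative):

	// Hypothetical round trip: save an image from the cluster to a tar,
	// load it back into the cluster, then export it to the host docker daemon.
	package main

	import (
		"os"
		"os/exec"
	)

	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}

	func main() {
		const (
			mk  = "out/minikube-linux-amd64"
			p   = "functional-20220601102456-6708"
			img = "gcr.io/google-containers/addon-resizer:functional-20220601102456-6708"
			tar = "/tmp/addon-resizer-save.tar" // illustrative path
		)
		run(mk, "-p", p, "image", "save", img, tar)        // cluster -> tar
		run(mk, "-p", p, "image", "load", tar)             // tar -> cluster
		run(mk, "-p", p, "image", "save", "--daemon", img) // cluster -> host docker
		run("docker", "image", "inspect", img)             // verify on host
	}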

TestFunctional/parallel/MountCmd/any-port (8.29s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220601102456-6708 /tmp/TestFunctionalparallelMountCmdany-port3326321797/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1654079270145347156" to /tmp/TestFunctionalparallelMountCmdany-port3326321797/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1654079270145347156" to /tmp/TestFunctionalparallelMountCmdany-port3326321797/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1654079270145347156" to /tmp/TestFunctionalparallelMountCmdany-port3326321797/001/test-1654079270145347156
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.715462ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  1 10:27 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  1 10:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  1 10:27 test-1654079270145347156
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh cat /mount-9p/test-1654079270145347156
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220601102456-6708 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [1cccd8af-2bd8-4418-bed7-af4677921263] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [1cccd8af-2bd8-4418-bed7-af4677921263] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [1cccd8af-2bd8-4418-bed7-af4677921263] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [1cccd8af-2bd8-4418-bed7-af4677921263] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006182418s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220601102456-6708 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220601102456-6708 /tmp/TestFunctionalparallelMountCmdany-port3326321797/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.29s)
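The two findmnt attempts above show the test's pattern: the 9p mount is not instantaneous, so the check is retried until the mount appears. A hypothetical standalone version of that poll (sketch code, not the test's source; the attempt count and sleep are assumptions):

	// Hypothetical retry loop for the 9p mount check shown above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			out, err := exec.Command("out/minikube-linux-amd64", "-p",
				"functional-20220601102456-6708", "ssh",
				"findmnt -T /mount-9p | grep 9p").CombinedOutput()
			if err == nil {
				fmt.Printf("mounted: %s", out)
				return
			}
			time.Sleep(time.Second) // mount not up yet; retry
		}
		panic("/mount-9p never appeared")
	}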

TestFunctional/parallel/MountCmd/specific-port (2.24s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220601102456-6708 /tmp/TestFunctionalparallelMountCmdspecific-port1387334528/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (470.22721ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220601102456-6708 /tmp/TestFunctionalparallelMountCmdspecific-port1387334528/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh "sudo umount -f /mount-9p": exit status 1 (392.383227ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220601102456-6708 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220601102456-6708 /tmp/TestFunctionalparallelMountCmdspecific-port1387334528/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
2022/06/01 10:28:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/delete_addon-resizer_images (0.1s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220601102456-6708
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220601102456-6708
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220601102456-6708
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (74.74s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220601102806-6708 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0601 10:28:34.851661    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220601102806-6708 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m14.743633006s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (74.74s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.75s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601102806-6708 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220601102806-6708 addons enable ingress --alsologtostderr -v=5: (9.75483916s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.75s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.4s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601102806-6708 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.40s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (31.51s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601102806-6708 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220601102806-6708 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.438749611s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601102806-6708 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601102806-6708 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [f626c3d6-64f3-4b3d-b738-0603b69144e4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [f626c3d6-64f3-4b3d-b738-0603b69144e4] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.009016398s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601102806-6708 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601102806-6708 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601102806-6708 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601102806-6708 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220601102806-6708 addons disable ingress-dns --alsologtostderr -v=1: (4.537881898s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601102806-6708 addons disable ingress --alsologtostderr -v=1
E0601 10:29:56.772756    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220601102806-6708 addons disable ingress --alsologtostderr -v=1: (7.26458468s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (31.51s)
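The sequence above validates the legacy ingress stack end to end: a curl with a spoofed Host header through the controller, then a DNS lookup of hello-john.test against the minikube node IP. A hypothetical standalone re-run of those two probes (sketch code, using only the commands shown in the log):

	// Hypothetical re-run of the ingress probes from the test above:
	// curl inside the node with a Host header, then nslookup against the
	// ingress-dns addon on the node IP (192.168.49.2 in this run).
	package main

	import (
		"os"
		"os/exec"
	)

	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}

	func main() {
		run("out/minikube-linux-amd64", "-p", "ingress-addon-legacy-20220601102806-6708",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		run("nslookup", "hello-john.test", "192.168.49.2")
	}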

TestJSONOutput/start/Command (46.76s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220601103005-6708 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220601103005-6708 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (46.757773485s)
--- PASS: TestJSONOutput/start/Command (46.76s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220601103005-6708 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220601103005-6708 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (15.71s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220601103005-6708 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220601103005-6708 --output=json --user=testUser: (15.705450875s)
--- PASS: TestJSONOutput/stop/Command (15.71s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.3s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220601103114-6708 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220601103114-6708 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.134299ms)

-- stdout --
	{"specversion":"1.0","id":"0df161aa-f729-4699-ba86-52246d8ed752","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220601103114-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"37c49b2c-e6e7-4ab7-96b5-e3c2edea495a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"d9918421-bc4a-41f6-aa8a-b1cf87394dff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7e87399d-69de-48dd-b381-08c453a4e42c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig"}}
	{"specversion":"1.0","id":"07dd586f-3b55-471c-b138-205f0e5fe59c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube"}}
	{"specversion":"1.0","id":"0e25e8a9-3022-47f9-8594-875043554bcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"41d11ae3-3a63-4f73-81ea-8bfcdb86ecbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220601103114-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220601103114-6708
--- PASS: TestErrorJSONOutput (0.30s)
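
The error path shows the same CloudEvents framing: the bogus --driver=fail produces a single io.k8s.sigs.minikube.error event (name DRV_UNSUPPORTED_OS) and exit status 56. Pulling just the error name and message out of the stream could look like the following; jq is illustrative and not part of the test:

  out/minikube-linux-amd64 start -p json-output-error-20220601103114-6708 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'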

TestKicCustomNetwork/create_custom_network (32.56s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220601103114-6708 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220601103114-6708 --network=: (30.353521695s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220601103114-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220601103114-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220601103114-6708: (2.179497467s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.56s)
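
Note the empty --network= value: minikube then falls back to its docker-driver default of creating a dedicated docker network named after the profile, which is what the docker network ls call above checks for. A manual spot-check with a placeholder profile name:

  out/minikube-linux-amd64 start -p network-demo --network=
  docker network ls --format '{{.Name}}' | grep network-demo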

TestKicCustomNetwork/use_default_bridge_network (25.88s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220601103147-6708 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220601103147-6708 --network=bridge: (23.785499005s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220601103147-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220601103147-6708
E0601 10:32:12.929075    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220601103147-6708: (2.067891984s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.88s)

TestKicExistingNetwork (27.46s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220601103213-6708 --network=existing-network
E0601 10:32:21.870497    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:21.875744    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:21.885980    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:21.906229    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:21.946454    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:22.026750    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:22.187153    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:22.507707    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:23.148599    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:24.429240    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:26.990648    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:32:32.110811    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220601103213-6708 --network=existing-network: (25.014275753s)
helpers_test.go:175: Cleaning up "existing-network-20220601103213-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220601103213-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220601103213-6708: (2.230042543s)
--- PASS: TestKicExistingNetwork (27.46s)
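
This test expects the named docker network to exist before minikube starts; the helper that pre-creates it is not visible in this log, but the equivalent manual sequence would be roughly:

  docker network create existing-network
  out/minikube-linux-amd64 start -p existing-network-demo --network=existing-network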

TestKicCustomSubnet (26.59s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220601103240-6708 --subnet=192.168.60.0/24
E0601 10:32:40.613301    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:32:42.351924    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:33:02.833101    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220601103240-6708 --subnet=192.168.60.0/24: (24.349778728s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220601103240-6708 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220601103240-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220601103240-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220601103240-6708: (2.211614736s)
--- PASS: TestKicCustomSubnet (26.59s)
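
The subnet assertion reduces to comparing the requested CIDR against what docker actually allocated for the profile's network, exactly as the inspect call above does. A by-hand repro with a placeholder profile name:

  out/minikube-linux-amd64 start -p subnet-demo --subnet=192.168.60.0/24
  docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
  # expected output: 192.168.60.0/24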

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (69.93s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-20220601103307-6708 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-20220601103307-6708 --driver=docker  --container-runtime=containerd: (31.522514457s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-20220601103307-6708 --driver=docker  --container-runtime=containerd
E0601 10:33:43.793889    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-20220601103307-6708 --driver=docker  --container-runtime=containerd: (32.395874412s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-20220601103307-6708
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-20220601103307-6708
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220601103307-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-20220601103307-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-20220601103307-6708: (2.391058119s)
helpers_test.go:175: Cleaning up "first-20220601103307-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-20220601103307-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-20220601103307-6708: (2.307967111s)
--- PASS: TestMinikubeProfile (69.93s)
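
The profile assertions parse profile list -ojson, which reports every known profile as JSON. Listing only the valid profile names might look like this, though jq and the .valid[].Name field layout are assumptions of this note rather than anything the test pins down:

  out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'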

TestMountStart/serial/StartWithMountFirst (4.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220601103416-6708 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220601103416-6708 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.89342089s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.89s)
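
The mount flags correspond to the 9p share minikube establishes between host and guest: --mount-port pins the server port, --mount-msize sets the 9p message size, and --mount-uid/--mount-gid set the ownership seen inside the guest, with the share surfacing at /minikube-host as the VerifyMount* steps below confirm. A stripped-down variant with a placeholder profile name:

  out/minikube-linux-amd64 start -p mount-demo --memory=2048 --mount --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=containerd
  out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host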

TestMountStart/serial/VerifyMountFirst (0.33s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220601103416-6708 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)

TestMountStart/serial/StartWithMountSecond (4.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220601103416-6708 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220601103416-6708 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.853306536s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.85s)

TestMountStart/serial/VerifyMountSecond (0.34s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220601103416-6708 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

TestMountStart/serial/DeleteFirst (1.82s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220601103416-6708 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220601103416-6708 --alsologtostderr -v=5: (1.822564798s)
--- PASS: TestMountStart/serial/DeleteFirst (1.82s)

TestMountStart/serial/VerifyMountPostDelete (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220601103416-6708 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.34s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220601103416-6708
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220601103416-6708: (1.253395242s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (6.55s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220601103416-6708
E0601 10:34:31.087439    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:31.092703    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:31.102935    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:31.123889    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:31.164145    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:31.244414    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:31.405108    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:31.725664    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:32.366595    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:33.647141    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:36.207971    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220601103416-6708: (5.551156431s)
--- PASS: TestMountStart/serial/RestartStopped (6.55s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220601103416-6708 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (75.41s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103439-6708 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0601 10:34:41.328970    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:34:51.569610    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:35:05.745872    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:35:12.049843    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:35:53.010455    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220601103439-6708 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.846097983s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.41s)

TestMultiNode/serial/DeployApp2Nodes (4.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- rollout status deployment/busybox: (2.993064217s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-29pzk -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-js59f -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-29pzk -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-js59f -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-29pzk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-js59f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.54s)
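
The manifest deploys two busybox replicas (spread across the nodes, going by the per-pod checks), and the assertions are plain in-pod DNS lookups at increasing qualification: kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local. Re-running one lookup by hand, reusing a pod name from the get pods query above:

  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- \
    exec busybox-7978565885-29pzk -- nslookup kubernetes.default.svc.cluster.local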

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-29pzk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-29pzk -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-js59f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-js59f -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
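
Inside each pod, host.minikube.internal resolves to the address the pods then ping (192.168.49.1, the docker network gateway on this driver), so a successful ping from both pods demonstrates pod-to-host egress from both nodes; the awk/cut pipeline merely extracts that address from busybox's nslookup output:

  out/minikube-linux-amd64 kubectl -p multinode-20220601103439-6708 -- exec busybox-7978565885-29pzk -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"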

TestMultiNode/serial/AddNode (30.97s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220601103439-6708 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220601103439-6708 -v 3 --alsologtostderr: (30.229336566s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.97s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.72s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp testdata/cp-test.txt multinode-20220601103439-6708:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp multinode-20220601103439-6708:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2364937881/001/cp-test_multinode-20220601103439-6708.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp multinode-20220601103439-6708:/home/docker/cp-test.txt multinode-20220601103439-6708-m02:/home/docker/cp-test_multinode-20220601103439-6708_multinode-20220601103439-6708-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m02 "sudo cat /home/docker/cp-test_multinode-20220601103439-6708_multinode-20220601103439-6708-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp multinode-20220601103439-6708:/home/docker/cp-test.txt multinode-20220601103439-6708-m03:/home/docker/cp-test_multinode-20220601103439-6708_multinode-20220601103439-6708-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m03 "sudo cat /home/docker/cp-test_multinode-20220601103439-6708_multinode-20220601103439-6708-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp testdata/cp-test.txt multinode-20220601103439-6708-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp multinode-20220601103439-6708-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2364937881/001/cp-test_multinode-20220601103439-6708-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp multinode-20220601103439-6708-m02:/home/docker/cp-test.txt multinode-20220601103439-6708:/home/docker/cp-test_multinode-20220601103439-6708-m02_multinode-20220601103439-6708.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708 "sudo cat /home/docker/cp-test_multinode-20220601103439-6708-m02_multinode-20220601103439-6708.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp multinode-20220601103439-6708-m02:/home/docker/cp-test.txt multinode-20220601103439-6708-m03:/home/docker/cp-test_multinode-20220601103439-6708-m02_multinode-20220601103439-6708-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m03 "sudo cat /home/docker/cp-test_multinode-20220601103439-6708-m02_multinode-20220601103439-6708-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp testdata/cp-test.txt multinode-20220601103439-6708-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp multinode-20220601103439-6708-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2364937881/001/cp-test_multinode-20220601103439-6708-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp multinode-20220601103439-6708-m03:/home/docker/cp-test.txt multinode-20220601103439-6708:/home/docker/cp-test_multinode-20220601103439-6708-m03_multinode-20220601103439-6708.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708 "sudo cat /home/docker/cp-test_multinode-20220601103439-6708-m03_multinode-20220601103439-6708.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp multinode-20220601103439-6708-m03:/home/docker/cp-test.txt multinode-20220601103439-6708-m02:/home/docker/cp-test_multinode-20220601103439-6708-m03_multinode-20220601103439-6708-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 ssh -n multinode-20220601103439-6708-m02 "sudo cat /home/docker/cp-test_multinode-20220601103439-6708-m03_multinode-20220601103439-6708-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.72s)
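
minikube cp accepts node-qualified paths on either side, and the matrix above exercises every direction: host-to-node, node-to-host, and node-to-node, each verified by an ssh "sudo cat" readback. The node-to-node shape, with a destination filename chosen for this note:

  out/minikube-linux-amd64 -p multinode-20220601103439-6708 cp \
    multinode-20220601103439-6708-m02:/home/docker/cp-test.txt \
    multinode-20220601103439-6708-m03:/home/docker/cp-test_demo.txt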

TestMultiNode/serial/StopNode (2.44s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220601103439-6708 node stop m03: (1.262255652s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220601103439-6708 status: exit status 7 (587.383991ms)

-- stdout --
	multinode-20220601103439-6708
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601103439-6708-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601103439-6708-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220601103439-6708 status --alsologtostderr: exit status 7 (593.174353ms)

-- stdout --
	multinode-20220601103439-6708
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601103439-6708-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601103439-6708-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0601 10:36:45.388756   95051 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:36:45.388894   95051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:45.388906   95051 out.go:309] Setting ErrFile to fd 2...
	I0601 10:36:45.388914   95051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:45.389033   95051 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:36:45.389211   95051 out.go:303] Setting JSON to false
	I0601 10:36:45.389232   95051 mustload.go:65] Loading cluster: multinode-20220601103439-6708
	I0601 10:36:45.389539   95051 config.go:178] Loaded profile config "multinode-20220601103439-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:36:45.389557   95051 status.go:253] checking status of multinode-20220601103439-6708 ...
	I0601 10:36:45.389933   95051 cli_runner.go:164] Run: docker container inspect multinode-20220601103439-6708 --format={{.State.Status}}
	I0601 10:36:45.421493   95051 status.go:328] multinode-20220601103439-6708 host status = "Running" (err=<nil>)
	I0601 10:36:45.421522   95051 host.go:66] Checking if "multinode-20220601103439-6708" exists ...
	I0601 10:36:45.421791   95051 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601103439-6708
	I0601 10:36:45.454823   95051 host.go:66] Checking if "multinode-20220601103439-6708" exists ...
	I0601 10:36:45.455193   95051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:36:45.455260   95051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601103439-6708
	I0601 10:36:45.486427   95051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/multinode-20220601103439-6708/id_rsa Username:docker}
	I0601 10:36:45.568226   95051 ssh_runner.go:195] Run: systemctl --version
	I0601 10:36:45.571618   95051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 10:36:45.580050   95051 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:36:45.678449   95051 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:36:45.608596252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:36:45.679170   95051 kubeconfig.go:92] found "multinode-20220601103439-6708" server: "https://192.168.49.2:8443"
	I0601 10:36:45.679197   95051 api_server.go:165] Checking apiserver status ...
	I0601 10:36:45.679227   95051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 10:36:45.688246   95051 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1205/cgroup
	I0601 10:36:45.695297   95051 api_server.go:181] apiserver freezer: "9:freezer:/docker/0b3a5cf39ad0850fcb2fbfac463270cac8ca343ba20055816307386f1b217dbe/kubepods/burstable/podefe6952c5f055e4e7c88c9c8948205c3/472a81cf5e390f6da300b6e32627009de7870445d8faebdada5755842694a45c"
	I0601 10:36:45.695349   95051 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0b3a5cf39ad0850fcb2fbfac463270cac8ca343ba20055816307386f1b217dbe/kubepods/burstable/podefe6952c5f055e4e7c88c9c8948205c3/472a81cf5e390f6da300b6e32627009de7870445d8faebdada5755842694a45c/freezer.state
	I0601 10:36:45.701563   95051 api_server.go:203] freezer state: "THAWED"
	I0601 10:36:45.701586   95051 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0601 10:36:45.705962   95051 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0601 10:36:45.705985   95051 status.go:419] multinode-20220601103439-6708 apiserver status = Running (err=<nil>)
	I0601 10:36:45.705997   95051 status.go:255] multinode-20220601103439-6708 status: &{Name:multinode-20220601103439-6708 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 10:36:45.706016   95051 status.go:253] checking status of multinode-20220601103439-6708-m02 ...
	I0601 10:36:45.706270   95051 cli_runner.go:164] Run: docker container inspect multinode-20220601103439-6708-m02 --format={{.State.Status}}
	I0601 10:36:45.737399   95051 status.go:328] multinode-20220601103439-6708-m02 host status = "Running" (err=<nil>)
	I0601 10:36:45.737421   95051 host.go:66] Checking if "multinode-20220601103439-6708-m02" exists ...
	I0601 10:36:45.737666   95051 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601103439-6708-m02
	I0601 10:36:45.767740   95051 host.go:66] Checking if "multinode-20220601103439-6708-m02" exists ...
	I0601 10:36:45.768025   95051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:36:45.768071   95051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601103439-6708-m02
	I0601 10:36:45.798217   95051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/multinode-20220601103439-6708-m02/id_rsa Username:docker}
	I0601 10:36:45.880278   95051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 10:36:45.889037   95051 status.go:255] multinode-20220601103439-6708-m02 status: &{Name:multinode-20220601103439-6708-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0601 10:36:45.889075   95051 status.go:253] checking status of multinode-20220601103439-6708-m03 ...
	I0601 10:36:45.889317   95051 cli_runner.go:164] Run: docker container inspect multinode-20220601103439-6708-m03 --format={{.State.Status}}
	I0601 10:36:45.920198   95051 status.go:328] multinode-20220601103439-6708-m03 host status = "Stopped" (err=<nil>)
	I0601 10:36:45.920219   95051 status.go:341] host is not running, skipping remaining checks
	I0601 10:36:45.920225   95051 status.go:255] multinode-20220601103439-6708-m03 status: &{Name:multinode-20220601103439-6708-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
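
Worth noting for scripting: with one node stopped, status exits 7 rather than 0 (seen twice above), so a degraded cluster can be detected from the exit code alone without parsing the table:

  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status
  echo $?   # 7 while m03 is stopped in this run; 0 once every node is running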

TestMultiNode/serial/StartAfterStop (36.27s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 node start m03 --alsologtostderr
E0601 10:37:12.928827    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:37:14.931920    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220601103439-6708 node start m03 --alsologtostderr: (35.451942567s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status
E0601 10:37:21.870660    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.27s)

TestMultiNode/serial/RestartKeepsNodes (186.56s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220601103439-6708
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220601103439-6708
E0601 10:37:49.586312    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220601103439-6708: (41.208427246s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103439-6708 --wait=true -v=8 --alsologtostderr
E0601 10:39:31.087444    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
E0601 10:39:58.772374    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220601103439-6708 --wait=true -v=8 --alsologtostderr: (2m25.224945623s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220601103439-6708
--- PASS: TestMultiNode/serial/RestartKeepsNodes (186.56s)

TestMultiNode/serial/DeleteNode (5.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220601103439-6708 node delete m03: (4.481308724s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.18s)

TestMultiNode/serial/StopMultiNode (40.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220601103439-6708 stop: (40.043024492s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220601103439-6708 status: exit status 7 (122.209602ms)

-- stdout --
	multinode-20220601103439-6708
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601103439-6708-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220601103439-6708 status --alsologtostderr: exit status 7 (124.363994ms)

-- stdout --
	multinode-20220601103439-6708
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601103439-6708-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0601 10:41:14.160187  105526 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:41:14.160335  105526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:41:14.160344  105526 out.go:309] Setting ErrFile to fd 2...
	I0601 10:41:14.160348  105526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:41:14.160453  105526 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:41:14.160593  105526 out.go:303] Setting JSON to false
	I0601 10:41:14.160610  105526 mustload.go:65] Loading cluster: multinode-20220601103439-6708
	I0601 10:41:14.160938  105526 config.go:178] Loaded profile config "multinode-20220601103439-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:41:14.160953  105526 status.go:253] checking status of multinode-20220601103439-6708 ...
	I0601 10:41:14.161314  105526 cli_runner.go:164] Run: docker container inspect multinode-20220601103439-6708 --format={{.State.Status}}
	I0601 10:41:14.192361  105526 status.go:328] multinode-20220601103439-6708 host status = "Stopped" (err=<nil>)
	I0601 10:41:14.192398  105526 status.go:341] host is not running, skipping remaining checks
	I0601 10:41:14.192405  105526 status.go:255] multinode-20220601103439-6708 status: &{Name:multinode-20220601103439-6708 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 10:41:14.192459  105526 status.go:253] checking status of multinode-20220601103439-6708-m02 ...
	I0601 10:41:14.192704  105526 cli_runner.go:164] Run: docker container inspect multinode-20220601103439-6708-m02 --format={{.State.Status}}
	I0601 10:41:14.224749  105526 status.go:328] multinode-20220601103439-6708-m02 host status = "Stopped" (err=<nil>)
	I0601 10:41:14.224772  105526 status.go:341] host is not running, skipping remaining checks
	I0601 10:41:14.224782  105526 status.go:255] multinode-20220601103439-6708-m02 status: &{Name:multinode-20220601103439-6708-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.29s)

TestMultiNode/serial/RestartMultiNode (90.85s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103439-6708 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0601 10:42:12.929099    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:42:21.870536    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220601103439-6708 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m30.159066613s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103439-6708 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (90.85s)
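
The go-template passed to kubectl above is the suite's readiness probe: it prints the status of each node's Ready condition, one per line. Below is a minimal standalone sketch of the same check, assuming kubectl is on PATH; the scan for "False"/"Unknown" mirrors what the test asserts, not its exact code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same template the test passes (outer shell quotes dropped):
	// print each node's Ready condition status, one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// Any node whose Ready status is not "True" fails the check.
	if strings.Contains(string(out), "False") || strings.Contains(string(out), "Unknown") {
		fmt.Println("at least one node is not Ready")
	} else {
		fmt.Println("all nodes Ready")
	}
}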

TestMultiNode/serial/ValidateNameConflict (32.31s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220601103439-6708
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103439-6708-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220601103439-6708-m02 --driver=docker  --container-runtime=containerd: exit status 14 (81.283981ms)

-- stdout --
	* [multinode-20220601103439-6708-m02] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220601103439-6708-m02' is duplicated with machine name 'multinode-20220601103439-6708-m02' in profile 'multinode-20220601103439-6708'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103439-6708-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220601103439-6708-m03 --driver=docker  --container-runtime=containerd: (29.271946373s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220601103439-6708
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220601103439-6708: exit status 80 (336.475776ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220601103439-6708
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220601103439-6708-m03 already exists in multinode-20220601103439-6708-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220601103439-6708-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220601103439-6708-m03: (2.562342597s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.31s)
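
The name-conflict test above exercises the rule that a new profile may not reuse a machine name already owned by an existing multi-node profile (exit status 14), while a fresh name like "-m03" is accepted. A toy sketch of that rule; the helper and data here are hypothetical, not minikube's implementation.

package main

import "fmt"

// nameTaken is a hypothetical helper: a new profile name must not collide
// with a machine name already claimed by an existing multi-node profile.
func nameTaken(newProfile string, existingMachines []string) bool {
	for _, m := range existingMachines {
		if m == newProfile {
			return true
		}
	}
	return false
}

func main() {
	machines := []string{
		"multinode-20220601103439-6708",     // control plane
		"multinode-20220601103439-6708-m02", // worker owned by the profile
	}
	fmt.Println(nameTaken("multinode-20220601103439-6708-m02", machines)) // true  -> exit status 14
	fmt.Println(nameTaken("multinode-20220601103439-6708-m03", machines)) // false -> start succeeds
}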

TestPreload (190.36s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220601104321-6708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0601 10:43:35.974253    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:44:31.088012    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220601104321-6708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m25.309062502s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220601104321-6708 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220601104321-6708 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.954406384s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220601104321-6708 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220601104321-6708 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (1m40.313677302s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220601104321-6708 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20220601104321-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220601104321-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220601104321-6708: (2.418485772s)
--- PASS: TestPreload (190.36s)

TestScheduledStopUnix (108.7s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220601104632-6708 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220601104632-6708 --memory=2048 --driver=docker  --container-runtime=containerd: (31.907099526s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220601104632-6708 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220601104632-6708 -n scheduled-stop-20220601104632-6708
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220601104632-6708 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220601104632-6708 --cancel-scheduled
E0601 10:47:12.929051    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:47:21.870505    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220601104632-6708 -n scheduled-stop-20220601104632-6708
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220601104632-6708
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220601104632-6708 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220601104632-6708
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220601104632-6708: exit status 7 (89.638245ms)

-- stdout --
	scheduled-stop-20220601104632-6708
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220601104632-6708 -n scheduled-stop-20220601104632-6708
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220601104632-6708 -n scheduled-stop-20220601104632-6708: exit status 7 (88.041231ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220601104632-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220601104632-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220601104632-6708: (5.116340699s)
--- PASS: TestScheduledStopUnix (108.70s)
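
A scheduled stop is asynchronous, so the test keeps polling status until the host reports Stopped; minikube signals that state with exit status 7, which the suite treats as "may be ok". A hedged sketch of that polling loop, reusing the profile name from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "scheduled-stop-20220601104632-6708"
	for i := 0; i < 30; i++ {
		err := exec.Command("out/minikube-linux-amd64", "status", "-p", profile).Run()
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			fmt.Println("host reports Stopped (exit status 7, may be ok)")
			return
		}
		time.Sleep(10 * time.Second) // the stop was scheduled with --schedule 15s
	}
	fmt.Println("timed out waiting for the scheduled stop")
}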

TestInsufficientStorage (16.81s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220601104820-6708 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220601104820-6708 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.094807779s)

-- stdout --
	{"specversion":"1.0","id":"56f0e99e-f733-4c06-a035-6b33a53e0d6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220601104820-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d1ea207-396c-4e11-8715-7bcabb33c192","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"8687a0c5-3293-41c0-9e04-99a8c4fc4097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"410d2c4d-e9a8-4435-8136-a738bf9b4751","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig"}}
	{"specversion":"1.0","id":"bd3c0a70-9db3-4c31-af9b-d698ec14bcaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube"}}
	{"specversion":"1.0","id":"b78acb45-8e92-4403-ba8c-c1dde6686165","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"560a18a4-2d94-47af-a496-055dc6a94d26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6fc2e739-a03c-4eaf-b395-e5610f1c26c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"67556517-b405-4536-bf75-918016f7de1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c01caa0c-2772-41d0-9dca-5c0b38681b3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with the root privilege"}}
	{"specversion":"1.0","id":"233406ba-1bc8-47c8-a677-ad5c1c563ca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220601104820-6708 in cluster insufficient-storage-20220601104820-6708","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"32dedd62-1f9d-475b-bf46-5fc048f6f47e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"861c123f-153b-4ba5-bdcc-8b88f665bbcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"60854bd5-2b06-4425-bf25-b38734cdb2a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220601104820-6708 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220601104820-6708 --output=json --layout=cluster: exit status 7 (344.908221ms)

-- stdout --
	{"Name":"insufficient-storage-20220601104820-6708","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220601104820-6708","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0601 10:48:31.177920  126190 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220601104820-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220601104820-6708 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220601104820-6708 --output=json --layout=cluster: exit status 7 (341.440242ms)

-- stdout --
	{"Name":"insufficient-storage-20220601104820-6708","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220601104820-6708","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0601 10:48:31.519818  126301 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220601104820-6708" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	E0601 10:48:31.527989  126301 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/insufficient-storage-20220601104820-6708/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220601104820-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220601104820-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220601104820-6708: (6.024400822s)
--- PASS: TestInsufficientStorage (16.81s)
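
The --output=json lines above are CloudEvents emitted by minikube start; the storage failure arrives as an io.k8s.sigs.minikube.error event named RSRC_DOCKER_STORAGE with exitcode 26. A small decoder sketch follows; the struct covers only fields visible in the log and is illustrative, not minikube's own types.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the log above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		ExitCode string `json:"exitcode"`
		Message  string `json:"message"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` here
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // tolerate any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" && e.Data.Name == "RSRC_DOCKER_STORAGE" {
			fmt.Println("out of disk (exit code", e.Data.ExitCode, "):", e.Data.Message)
		}
	}
}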

TestRunningBinaryUpgrade (284.99s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.2517432619.exe start -p running-upgrade-20220601105304-6708 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.2517432619.exe start -p running-upgrade-20220601105304-6708 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (3m59.951300233s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220601105304-6708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0601 10:57:12.928951    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:57:21.870200    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220601105304-6708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.358550069s)
helpers_test.go:175: Cleaning up "running-upgrade-20220601105304-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220601105304-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220601105304-6708: (7.187733483s)
--- PASS: TestRunningBinaryUpgrade (284.99s)
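
This test's pattern (shared by TestStoppedBinaryUpgrade below) is: create a cluster with an old released binary, then run start on the same profile with the binary under test, which must adopt and upgrade it in place. A hedged driver-loop sketch; the binary paths and profile name are placeholders, not the tempfile names from the log.

package main

import (
	"fmt"
	"os/exec"
)

func run(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Printf("%s %v -> err=%v (%d bytes of output)\n", bin, args, err, len(out))
	return err
}

func main() {
	profile := "running-upgrade-demo" // hypothetical profile name
	// 1) An old released binary creates the cluster ...
	_ = run("/tmp/minikube-old", "start", "-p", profile,
		"--memory=2200", "--vm-driver=docker", "--container-runtime=containerd")
	// 2) ... then the binary under test must adopt and upgrade it in place.
	_ = run("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=2200", "--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=containerd")
}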

TestKubernetesUpgrade (128.78s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601105039-6708 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0601 10:50:54.132533    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601105039-6708 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.773310786s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220601105039-6708
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220601105039-6708: (1.277119854s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220601105039-6708 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220601105039-6708 status --format={{.Host}}: exit status 7 (89.435708ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601105039-6708 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601105039-6708 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.492773192s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220601105039-6708 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601105039-6708 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601105039-6708 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (88.226351ms)

-- stdout --
	* [kubernetes-upgrade-20220601105039-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220601105039-6708
	    minikube start -p kubernetes-upgrade-20220601105039-6708 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601105039-67082 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601105039-6708 --kubernetes-version=v1.23.6
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601105039-6708 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0601 10:52:12.928457    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 10:52:21.870024    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601105039-6708 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.969947799s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220601105039-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220601105039-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220601105039-6708: (3.032592412s)
--- PASS: TestKubernetesUpgrade (128.78s)
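
The downgrade attempt above fails fast (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) because the requested v1.16.0 is older than the cluster's v1.23.6. An illustrative stand-in for that guard, comparing only minor versions; minikube's real check is more thorough.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor pulls the minor number out of a "v1.23.6"-style tag; good enough
// for this illustration, not a full semver comparison.
func minor(v string) int {
	n, _ := strconv.Atoi(strings.Split(strings.TrimPrefix(v, "v"), ".")[1])
	return n
}

func main() {
	current, requested := "v1.23.6", "v1.16.0"
	if minor(requested) < minor(current) {
		// Corresponds to K8S_DOWNGRADE_UNSUPPORTED (exit status 106) above.
		fmt.Printf("unable to safely downgrade Kubernetes %s cluster to %s\n", current, requested)
	}
}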

TestMissingContainerUpgrade (119.58s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.583214644.exe start -p missing-upgrade-20220601104839-6708 --memory=2200 --driver=docker  --container-runtime=containerd
E0601 10:48:44.946971    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 10:49:31.088060    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.583214644.exe start -p missing-upgrade-20220601104839-6708 --memory=2200 --driver=docker  --container-runtime=containerd: (1m6.546062923s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220601104839-6708
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220601104839-6708: (10.2918868s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220601104839-6708
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220601104839-6708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220601104839-6708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.907039728s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220601104839-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220601104839-6708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220601104839-6708: (3.274650952s)
--- PASS: TestMissingContainerUpgrade (119.58s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (108.682403ms)

-- stdout --
	* [NoKubernetes-20220601104837-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
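
Exit status 14 (MK_USAGE) here comes from plain flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive. A toy sketch of such a check, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
)

// validateFlags is a toy stand-in for the start-time validation above.
func validateFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	if err := validateFlags(true, "1.20"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // minikube exits 14 here
	}
}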

TestNoKubernetes/serial/StartWithK8s (255.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --driver=docker  --container-runtime=containerd: (4m14.929727182s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220601104837-6708 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (255.44s)

TestNetworkPlugins/group/false (1.26s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220601104838-6708 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220601104838-6708 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (293.721813ms)

-- stdout --
	* [false-20220601104838-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0601 10:48:38.182610  127317 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:48:38.182756  127317 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:48:38.182777  127317 out.go:309] Setting ErrFile to fd 2...
	I0601 10:48:38.182787  127317 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:48:38.182931  127317 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:48:38.183306  127317 out.go:303] Setting JSON to false
	I0601 10:48:38.184399  127317 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1873,"bootTime":1654078646,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:48:38.184460  127317 start.go:125] virtualization: kvm guest
	I0601 10:48:38.188358  127317 out.go:177] * [false-20220601104838-6708] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 10:48:38.190079  127317 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:48:38.191654  127317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:48:38.191771  127317 notify.go:193] Checking for updates...
	I0601 10:48:38.193312  127317 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:48:38.195027  127317 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:48:38.196504  127317 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 10:48:38.198407  127317 config.go:178] Loaded profile config "NoKubernetes-20220601104837-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:48:38.198576  127317 config.go:178] Loaded profile config "force-systemd-env-20220601104837-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:48:38.198754  127317 config.go:178] Loaded profile config "offline-containerd-20220601104837-6708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:48:38.198847  127317 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:48:38.247982  127317 docker.go:137] docker version: linux-20.10.16
	I0601 10:48:38.248076  127317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:48:38.382830  127317 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2022-06-01 10:48:38.282864698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:48:38.382986  127317 docker.go:254] overlay module found
	I0601 10:48:38.386016  127317 out.go:177] * Using the docker driver based on user configuration
	I0601 10:48:38.387624  127317 start.go:284] selected driver: docker
	I0601 10:48:38.387644  127317 start.go:806] validating driver "docker" against <nil>
	I0601 10:48:38.387665  127317 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:48:38.390272  127317 out.go:177] 
	W0601 10:48:38.392104  127317 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0601 10:48:38.393659  127317 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-20220601104838-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220601104838-6708
--- PASS: TestNetworkPlugins/group/false (1.26s)
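
This subtest never starts a cluster: start fails validation (exit status 14) because the containerd runtime requires a CNI, so --cni=false is rejected. A sketch of that kind of guard, purely illustrative:

package main

import "fmt"

// validateCNI is illustrative only; minikube's real check lives in its
// start-time validation.
func validateCNI(runtime, cni string) error {
	if runtime == "containerd" && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // exit status 14
	}
}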

TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestStoppedBinaryUpgrade/Upgrade (104s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.2285374054.exe start -p stopped-upgrade-20220601105247-6708 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.2285374054.exe start -p stopped-upgrade-20220601105247-6708 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.680241684s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.2285374054.exe -p stopped-upgrade-20220601105247-6708 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.2285374054.exe -p stopped-upgrade-20220601105247-6708 stop: (1.261869011s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220601105247-6708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220601105247-6708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (54.058169584s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (104.00s)

TestNoKubernetes/serial/StartWithStopK8s (19.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.617741319s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220601104837-6708 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220601104837-6708 status -o json: exit status 2 (439.330313ms)

-- stdout --
	{"Name":"NoKubernetes-20220601104837-6708","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220601104837-6708
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220601104837-6708: (4.44346256s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.50s)

TestNoKubernetes/serial/Start (4.9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.898798678s)
--- PASS: TestNoKubernetes/serial/Start (4.90s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220601104837-6708 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220601104837-6708 "sudo systemctl is-active --quiet service kubelet": exit status 1 (378.720938ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
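
"systemctl is-active --quiet" exits non-zero when the unit is inactive, and minikube ssh propagates that status back, which is what the test keys on (the "Process exited with status 3" in stderr is systemd's code for an inactive unit). A hedged sketch of the same probe, assuming minikube ssh joins its trailing arguments into one remote command:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive shells out the same way the test does; a zero exit means
// the kubelet unit is active inside the node.
func kubeletActive(profile string) bool {
	return exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive("NoKubernetes-20220601104837-6708"))
}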

TestNoKubernetes/serial/ProfileList (1.56s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.56s)

TestNoKubernetes/serial/Stop (2.97s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220601104837-6708

=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220601104837-6708: (2.972046105s)
--- PASS: TestNoKubernetes/serial/Stop (2.97s)

TestNoKubernetes/serial/StartNoArgs (5.68s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220601104837-6708 --driver=docker  --container-runtime=containerd: (5.683796546s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.68s)

TestPause/serial/Start (48.03s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220601105324-6708 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220601105324-6708 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (48.034379663s)
--- PASS: TestPause/serial/Start (48.03s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220601104837-6708 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220601104837-6708 "sudo systemctl is-active --quiet service kubelet": exit status 1 (381.102547ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestPause/serial/SecondStartNoReconfiguration (16.04s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220601105324-6708 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220601105324-6708 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.026094152s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.04s)

TestPause/serial/Pause (0.71s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220601105324-6708 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220601105324-6708 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220601105324-6708 --output=json --layout=cluster: exit status 2 (406.474741ms)

-- stdout --
	{"Name":"pause-20220601105324-6708","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220601105324-6708","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
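
The --layout=cluster status uses HTTP-flavored codes: 200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage, all visible in this report. A minimal sketch that decodes just the fields the pause test asserts on, using a trimmed copy of the stdout above:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Trimmed from the stdout above.
	raw := `{"Name":"pause-20220601105324-6708","StatusCode":418,"StatusName":"Paused"}`
	var st struct {
		Name       string
		StatusCode int
		StatusName string
	}
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s -> %d %s\n", st.Name, st.StatusCode, st.StatusName) // 418 means Paused
}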

TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220601105324-6708 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (5.35s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220601105324-6708 --alsologtostderr -v=5
E0601 10:54:31.086670    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
=== CONT  TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220601105324-6708 --alsologtostderr -v=5: (5.348873479s)
--- PASS: TestPause/serial/PauseAgain (5.35s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220601105247-6708
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

TestPause/serial/DeletePaused (7.55s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220601105324-6708 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220601105324-6708 --alsologtostderr -v=5: (7.554591622s)
--- PASS: TestPause/serial/DeletePaused (7.55s)

TestPause/serial/VerifyDeletedResources (0.6s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220601105324-6708
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220601105324-6708: exit status 1 (31.081066ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220601105324-6708
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)
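Note: once again the non-zero exit is the assertion itself: after delete -p, the profile's Docker volume must be gone, so "Error: No such volume" is the desired answer. The same check by hand (sketch):

    docker volume inspect pause-20220601105324-6708 >/dev/null 2>&1 \
      && echo "volume still present" || echo "volume removed, as expected after delete"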

TestNetworkPlugins/group/kindnet/Start (60.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220601104838-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220601104838-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m0.153173826s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.15s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-svdp7" [a96734ec-16f0-41eb-b073-89048bbfb108] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012437415s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220601104838-6708 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220601104838-6708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-9mnwp" [640672b8-06cd-499f-b6ea-157bb21987e6] Pending
helpers_test.go:342: "netcat-668db85669-9mnwp" [640672b8-06cd-499f-b6ea-157bb21987e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-9mnwp" [640672b8-06cd-499f-b6ea-157bb21987e6] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007748048s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220601104838-6708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220601104838-6708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220601104838-6708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)
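Note: the DNS/Localhost/HairPin trio probes the CNI from inside the netcat pod: nslookup kubernetes.default checks in-cluster DNS, the localhost probe checks that the pod can reach its own port directly, and the hairpin probe connects back to the pod through its own Service name (netcat), which only succeeds when the network plugin supports hairpin traffic. The hairpin probe by hand (sketch):

    kubectl --context kindnet-20220601104838-6708 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo "hairpin OK"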

TestNetworkPlugins/group/enable-default-cni/Start (70.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220601104837-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220601104837-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m10.277971031s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.28s)

TestNetworkPlugins/group/bridge/Start (52.56s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220601104837-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220601104837-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (52.564654759s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.56s)

TestNetworkPlugins/group/calico/Start (79.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220601104839-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p calico-20220601104839-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: (1m19.852798845s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.85s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.61s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220601104837-6708 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.61s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220601104837-6708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-fgjks" [61724aef-719e-4d83-8b9b-040d28bf489f] Pending
helpers_test.go:342: "netcat-668db85669-fgjks" [61724aef-719e-4d83-8b9b-040d28bf489f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-fgjks" [61724aef-719e-4d83-8b9b-040d28bf489f] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.006540911s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.82s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601104837-6708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220601104837-6708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220601104837-6708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/cilium/Start (71.32s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220601104839-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220601104839-6708 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m11.32285583s)
--- PASS: TestNetworkPlugins/group/cilium/Start (71.32s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220601104837-6708 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220601104837-6708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-wlznz" [5980f1ba-abab-49a3-88d9-af4e7bc3792d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-wlznz" [5980f1ba-abab-49a3-88d9-af4e7bc3792d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.006077916s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220601104837-6708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220601104837-6708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220601104837-6708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-2r6r9" [fa30a767-a3b0-46bc-8389-4eea0afec174] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015124218s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20220601104839-6708 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-27mqw" [3e643cb0-5474-48e4-b348-17241aa88b3d] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.013515234s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220601104839-6708 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

TestNetworkPlugins/group/cilium/NetCatPod (8.8s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220601104839-6708 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-24gtf" [665b7469-dc28-4898-94a6-5cd962e77957] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0601 10:59:31.086865    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-24gtf" [665b7469-dc28-4898-94a6-5cd962e77957] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 8.006299874s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (8.80s)

TestNetworkPlugins/group/cilium/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220601104839-6708 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.12s)

TestNetworkPlugins/group/cilium/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220601104839-6708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.12s)

TestNetworkPlugins/group/cilium/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220601104839-6708 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.17s)

TestStartStop/group/no-preload/serial/FirstStart (60.67s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220601105939-6708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:00:15.974529    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220601105939-6708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: (1m0.671633849s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.67s)
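Note: --preload=false tells minikube to skip the preloaded image tarball and pull each Kubernetes image individually, which accounts for most of this start's minute-long runtime. The flag in isolation (sketch, hypothetical profile name):

    out/minikube-linux-amd64 start -p my-profile --preload=false --driver=docker --container-runtime=containerd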

TestStartStop/group/no-preload/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220601105939-6708 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [3655e985-5c7a-4626-ba04-ef6beb5a67f2] Pending
helpers_test.go:342: "busybox" [3655e985-5c7a-4626-ba04-ef6beb5a67f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [3655e985-5c7a-4626-ba04-ef6beb5a67f2] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.011069723s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220601105939-6708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.57s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220601105939-6708 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220601105939-6708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.57s)

TestStartStop/group/no-preload/serial/Stop (20.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220601105939-6708 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220601105939-6708 --alsologtostderr -v=3: (20.138488607s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.14s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220601105939-6708 -n no-preload-20220601105939-6708
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220601105939-6708 -n no-preload-20220601105939-6708: exit status 7 (96.932432ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220601105939-6708 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
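Note: minikube status reports cluster state through its exit code, and status 7 here appears to encode that the host, kubelet, and apiserver are all stopped; the test explicitly tolerates it ("may be ok" above) and then enables the dashboard addon against the stopped cluster. Quick manual check (sketch):

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220601105939-6708; echo "exit: $?"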

TestStartStop/group/no-preload/serial/SecondStart (323.67s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220601105939-6708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:01:21.551963    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:21.557213    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:21.567447    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:21.587683    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:21.627926    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:21.708204    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:21.868458    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:22.188881    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:22.829789    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:24.110069    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:26.670812    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:31.791598    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:01:42.032772    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:02:02.513468    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:02:12.929100    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102024-6708/client.crt: no such file or directory
E0601 11:02:21.870797    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102456-6708/client.crt: no such file or directory
E0601 11:02:43.474094    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
E0601 11:02:54.652179    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:54.657430    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:54.667669    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:54.687936    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:54.728160    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:54.808429    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:54.968888    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:55.289635    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:55.930701    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:57.211333    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:02:59.771754    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:04.892677    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
E0601 11:03:15.133490    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601104837-6708/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220601105939-6708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: (5m23.208332052s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220601105939-6708 -n no-preload-20220601105939-6708
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (323.67s)
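Note: the E0601 cert_rotation.go:168 burst during this start is background noise, not a failure of this test: the client-go certificate-rotation watcher in the shared test process appears to still be polling client certificates for profiles that earlier tests already deleted (kindnet-, enable-default-cni-, addons-, functional-...), hence "no such file or directory". Counting the noise in a saved log (sketch, hypothetical file name):

    grep -c 'cert_rotation.go:168' test-output.log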

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-7nmsk" [e1c18bca-fab9-46a0-941e-87b3a9be0cd8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-8469778f77-7nmsk" [e1c18bca-fab9-46a0-941e-87b3a9be0cd8] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.012894237s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-7nmsk" [e1c18bca-fab9-46a0-941e-87b3a9be0cd8] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005628184s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220601105939-6708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220601105939-6708 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)
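Note: this check lists images through the CRI and flags anything outside minikube's expected set; the "Found non-minikube image" lines are informational, not errors. Listing the image tags by hand, assuming jq is available (sketch):

    out/minikube-linux-amd64 ssh -p no-preload-20220601105939-6708 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'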

TestStartStop/group/no-preload/serial/Pause (3.08s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220601105939-6708 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220601105939-6708 -n no-preload-20220601105939-6708
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220601105939-6708 -n no-preload-20220601105939-6708: exit status 2 (386.783805ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220601105939-6708 -n no-preload-20220601105939-6708
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220601105939-6708 -n no-preload-20220601105939-6708: exit status 2 (387.396532ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220601105939-6708 --alsologtostderr -v=1
E0601 11:06:49.235896    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220601105939-6708 -n no-preload-20220601105939-6708
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220601105939-6708 -n no-preload-20220601105939-6708
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220601105850-6708 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220601105850-6708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.60s)

TestStartStop/group/old-k8s-version/serial/Stop (1.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220601105850-6708 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220601105850-6708 --alsologtostderr -v=3: (1.303283561s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601105850-6708 -n old-k8s-version-20220601105850-6708: exit status 7 (97.264322ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220601105850-6708 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/FirstStart (42.57s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220601111420-6708 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:14:22.035127    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601104839-6708/client.crt: no such file or directory
E0601 11:14:31.086483    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601102806-6708/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220601111420-6708 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: (42.574696761s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.57s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.48s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220601111420-6708 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.48s)
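Note: the WARNING above, like the zero-second DeployApp earlier in this group and the UserAppExistsAfterStop/AddonExistsAfterStop results below, reflects running with --network-plugin=cni but no CNI manifest applied: as the warning says, pods cannot schedule until additional setup is done, so those sub-tests short-circuit to trivial passes. The "additional setup" would be applying a CNI by hand, for example (hypothetical manifest path):

    kubectl --context newest-cni-20220601111420-6708 apply -f my-cni-manifest.yaml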

TestStartStop/group/newest-cni/serial/Stop (20.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220601111420-6708 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220601111420-6708 --alsologtostderr -v=3: (20.088248913s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.09s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220601111420-6708 -n newest-cni-20220601111420-6708
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220601111420-6708 -n newest-cni-20220601111420-6708: exit status 7 (97.463007ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220601111420-6708 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (34.17s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220601111420-6708 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:15:40.379382    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601105939-6708/client.crt: no such file or directory
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220601111420-6708 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: (33.779081577s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220601111420-6708 -n newest-cni-20220601111420-6708
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.17s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220601111420-6708 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/newest-cni/serial/Pause (2.81s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220601111420-6708 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220601111420-6708 -n newest-cni-20220601111420-6708
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220601111420-6708 -n newest-cni-20220601111420-6708: exit status 2 (379.890066ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220601111420-6708 -n newest-cni-20220601111420-6708
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220601111420-6708 -n newest-cni-20220601111420-6708: exit status 2 (382.810599ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220601111420-6708 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220601111420-6708 -n newest-cni-20220601111420-6708
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220601111420-6708 -n newest-cni-20220601111420-6708
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.81s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.56s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220601110327-6708 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220601110327-6708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.56s)

TestStartStop/group/embed-certs/serial/Stop (11.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220601110327-6708 --alsologtostderr -v=3
E0601 11:16:21.551974    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601104838-6708/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220601110327-6708 --alsologtostderr -v=3: (11.206260552s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601110327-6708 -n embed-certs-20220601110327-6708: exit status 7 (95.107117ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220601110327-6708 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
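
Editor's note: here exit status 7 from `status` is the expected signal that the host is Stopped, after which the test verifies an addon can still be enabled offline (the change persists and is applied on the next start). A minimal Go sketch of the stop/check/enable flow follows, assuming the binary path and profile name from the log; the same pattern repeats for the default-k8s-different-port profile below.

// Sketch: stop a profile, confirm the host reports Stopped (exit 7),
// then enable an addon while the cluster is down.
package main

import (
	"fmt"
	"os/exec"
)

// minikube runs the binary under test and returns combined output plus the
// exit code; non-exit errors (e.g. binary missing) abort the sketch.
func minikube(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		return string(out), exitErr.ExitCode()
	} else if err != nil {
		panic(err)
	}
	return string(out), 0
}

func main() {
	profile := "embed-certs-20220601110327-6708"
	minikube("stop", "-p", profile, "--alsologtostderr", "-v=3")
	// A stopped host reports "Stopped" with exit status 7; both are expected.
	state, code := minikube("status", "--format={{.Host}}", "-p", profile, "-n", profile)
	fmt.Printf("host=%s exit=%d\n", state, code)
	// Addons can be toggled while the cluster is down; minikube persists the
	// setting and applies it on the next start.
	minikube("addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=k8s.gcr.io/echoserver:1.4")
}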

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.57s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220601110654-6708 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220601110654-6708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.57s)

TestStartStop/group/default-k8s-different-port/serial/Stop (10.34s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220601110654-6708 --alsologtostderr -v=3
E0601 11:19:50.203290    6708 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3358-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601104839-6708/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220601110654-6708 --alsologtostderr -v=3: (10.337299315s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.34s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601110654-6708 -n default-k8s-different-port-20220601110654-6708: exit status 7 (92.156762ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220601110654-6708 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)


Test skip (23/267)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestDownloadOnly/v1.23.6/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.6/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:455: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet

=== CONT  TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI

=== CONT  TestNetworkPlugins/group/kubenet
helpers_test.go:175: Cleaning up "kubenet-20220601104837-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220601104837-6708
--- SKIP: TestNetworkPlugins/group/kubenet (0.27s)

TestNetworkPlugins/group/flannel (0.27s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220601104837-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220601104837-6708
--- SKIP: TestNetworkPlugins/group/flannel (0.27s)

TestNetworkPlugins/group/custom-flannel (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220601104839-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-20220601104839-6708
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.25s)

TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220601110654-6708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220601110654-6708
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)
